00:00:00.000 Started by upstream project "autotest-per-patch" build number 132414
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.116 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.117 The recommended git tool is: git
00:00:00.117 using credential 00000000-0000-0000-0000-000000000002
00:00:00.118 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.184 Fetching changes from the remote Git repository
00:00:00.186 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.245 Using shallow fetch with depth 1
00:00:00.245 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.245 > git --version # timeout=10
00:00:00.285 > git --version # 'git version 2.39.2'
00:00:00.285 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.311 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.311 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.400 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.412 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.425 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.425 > git config core.sparsecheckout # timeout=10
00:00:04.440 > git read-tree -mu HEAD # timeout=10
00:00:04.458 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.485 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.485 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.588 [Pipeline] Start of Pipeline
00:00:04.598 [Pipeline] library
00:00:04.600 Loading library shm_lib@master
00:00:04.600 Library shm_lib@master is cached. Copying from home.
00:00:04.614 [Pipeline] node
00:00:04.622 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.624 [Pipeline] {
00:00:04.631 [Pipeline] catchError
00:00:04.633 [Pipeline] {
00:00:04.641 [Pipeline] wrap
00:00:04.647 [Pipeline] {
00:00:04.655 [Pipeline] stage
00:00:04.657 [Pipeline] { (Prologue)
00:00:04.870 [Pipeline] sh
00:00:05.155 + logger -p user.info -t JENKINS-CI
00:00:05.172 [Pipeline] echo
00:00:05.173 Node: CYP9
00:00:05.180 [Pipeline] sh
00:00:05.481 [Pipeline] setCustomBuildProperty
00:00:05.489 [Pipeline] echo
00:00:05.491 Cleanup processes
00:00:05.494 [Pipeline] sh
00:00:05.776 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.776 1633296 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.788 [Pipeline] sh
00:00:06.075 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.075 ++ grep -v 'sudo pgrep'
00:00:06.075 ++ awk '{print $1}'
00:00:06.075 + sudo kill -9
00:00:06.075 + true
00:00:06.090 [Pipeline] cleanWs
00:00:06.100 [WS-CLEANUP] Deleting project workspace...
00:00:06.100 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.105 [WS-CLEANUP] done
00:00:06.109 [Pipeline] setCustomBuildProperty
00:00:06.124 [Pipeline] sh
00:00:06.408 + sudo git config --global --replace-all safe.directory '*'
00:00:06.490 [Pipeline] httpRequest
00:00:06.842 [Pipeline] echo
00:00:06.843 Sorcerer 10.211.164.20 is alive
00:00:06.850 [Pipeline] retry
00:00:06.851 [Pipeline] {
00:00:06.862 [Pipeline] httpRequest
00:00:06.867 HttpMethod: GET
00:00:06.867 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.868 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.879 Response Code: HTTP/1.1 200 OK
00:00:06.880 Success: Status code 200 is in the accepted range: 200,404
00:00:06.880 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.198 [Pipeline] }
00:00:10.216 [Pipeline] // retry
00:00:10.224 [Pipeline] sh
00:00:10.512 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.527 [Pipeline] httpRequest
00:00:11.154 [Pipeline] echo
00:00:11.156 Sorcerer 10.211.164.20 is alive
00:00:11.166 [Pipeline] retry
00:00:11.168 [Pipeline] {
00:00:11.182 [Pipeline] httpRequest
00:00:11.187 HttpMethod: GET
00:00:11.187 URL: http://10.211.164.20/packages/spdk_325a79ea32598450a494aa701f4471855261d3cd.tar.gz
00:00:11.188 Sending request to url: http://10.211.164.20/packages/spdk_325a79ea32598450a494aa701f4471855261d3cd.tar.gz
00:00:11.193 Response Code: HTTP/1.1 200 OK
00:00:11.193 Success: Status code 200 is in the accepted range: 200,404
00:00:11.193 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_325a79ea32598450a494aa701f4471855261d3cd.tar.gz
00:01:33.029 [Pipeline] }
00:01:33.052 [Pipeline] // retry
00:01:33.061 [Pipeline] sh
00:01:33.355 + tar --no-same-owner -xf spdk_325a79ea32598450a494aa701f4471855261d3cd.tar.gz
00:01:35.917 [Pipeline] sh
00:01:36.206 + git -C spdk log --oneline -n5
00:01:36.206 325a79ea3 bdev/malloc: Support accel sequence when DIF is enabled
00:01:36.206 0b4b4be7e bdev: Add spdk_bdev_io_hide_metadata() for bdev modules
00:01:36.206 5200caf0b bdev/malloc: Extract internal of verify_pi() for code reuse
00:01:36.206 82b85d9ca bdev/malloc: malloc_done() uses switch-case for clean up
00:01:36.206 0728de5b0 nvmf: Add hide_metadata option to nvmf_subsystem_add_ns
00:01:36.219 [Pipeline] }
00:01:36.235 [Pipeline] // stage
00:01:36.246 [Pipeline] stage
00:01:36.249 [Pipeline] { (Prepare)
00:01:36.268 [Pipeline] writeFile
00:01:36.286 [Pipeline] sh
00:01:36.573 + logger -p user.info -t JENKINS-CI
00:01:36.589 [Pipeline] sh
00:01:36.881 + logger -p user.info -t JENKINS-CI
00:01:36.895 [Pipeline] sh
00:01:37.183 + cat autorun-spdk.conf
00:01:37.183 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:37.183 SPDK_TEST_NVMF=1
00:01:37.183 SPDK_TEST_NVME_CLI=1
00:01:37.183 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:37.183 SPDK_TEST_NVMF_NICS=e810
00:01:37.183 SPDK_TEST_VFIOUSER=1
00:01:37.183 SPDK_RUN_UBSAN=1
00:01:37.183 NET_TYPE=phy
00:01:37.191 RUN_NIGHTLY=0
00:01:37.196 [Pipeline] readFile
00:01:37.241 [Pipeline] withEnv
00:01:37.244 [Pipeline] {
00:01:37.261 [Pipeline] sh
00:01:37.544 + set -ex
00:01:37.544 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:37.544 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:37.544 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:37.544 ++ SPDK_TEST_NVMF=1
00:01:37.544 ++ SPDK_TEST_NVME_CLI=1
00:01:37.544 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:37.544 ++ SPDK_TEST_NVMF_NICS=e810
00:01:37.544 ++ SPDK_TEST_VFIOUSER=1
00:01:37.544 ++ SPDK_RUN_UBSAN=1
00:01:37.544 ++ NET_TYPE=phy
00:01:37.544 ++ RUN_NIGHTLY=0
00:01:37.544 + case $SPDK_TEST_NVMF_NICS in
00:01:37.544 + DRIVERS=ice
00:01:37.544 + [[ tcp == \r\d\m\a ]]
00:01:37.544 + [[ -n ice ]]
00:01:37.544 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:37.544 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:37.544 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:37.544 rmmod: ERROR: Module irdma is not currently loaded
00:01:37.544 rmmod: ERROR: Module i40iw is not currently loaded
00:01:37.544 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:37.544 + true
00:01:37.544 + for D in $DRIVERS
00:01:37.544 + sudo modprobe ice
00:01:37.555 + exit 0
00:01:37.555 [Pipeline] }
00:01:37.571 [Pipeline] // withEnv
00:01:37.576 [Pipeline] }
00:01:37.592 [Pipeline] // stage
00:01:37.603 [Pipeline] catchError
00:01:37.605 [Pipeline] {
00:01:37.619 [Pipeline] timeout
00:01:37.619 Timeout set to expire in 1 hr 0 min
00:01:37.621 [Pipeline] {
00:01:37.633 [Pipeline] stage
00:01:37.635 [Pipeline] { (Tests)
00:01:37.649 [Pipeline] sh
00:01:37.943 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:37.943 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:37.943 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:37.943 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:37.943 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:37.943 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:37.943 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:37.943 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:37.943 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:37.943 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:37.943 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:37.943 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:37.943 + source /etc/os-release
00:01:37.943 ++ NAME='Fedora Linux'
00:01:37.943 ++ VERSION='39 (Cloud Edition)'
00:01:37.943 ++ ID=fedora
00:01:37.943 ++ VERSION_ID=39
00:01:37.943 ++ VERSION_CODENAME=
00:01:37.943 ++ PLATFORM_ID=platform:f39
00:01:37.943 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:37.943 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:37.943 ++ LOGO=fedora-logo-icon
00:01:37.943 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:37.943 ++ HOME_URL=https://fedoraproject.org/
00:01:37.943 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:37.943 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:37.943 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:37.943 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:37.943 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:37.943 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:37.943 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:37.943 ++ SUPPORT_END=2024-11-12
00:01:37.943 ++ VARIANT='Cloud Edition'
00:01:37.943 ++ VARIANT_ID=cloud
00:01:37.943 + uname -a
00:01:37.943 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:37.943 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:41.247 Hugepages
00:01:41.247 node hugesize free / total
00:01:41.247 node0 1048576kB 0 / 0
00:01:41.247 node0 2048kB 0 / 0
00:01:41.247 node1 1048576kB 0 / 0
00:01:41.247 node1 2048kB 0 / 0
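The traced shell above follows a common CI warm-up idiom: guard on the job config file, source it with xtrace enabled, map the NIC under test to its kernel driver, then force a clean driver reload while tolerating rmmod failures for modules that were never loaded. A condensed, standalone sketch of that idiom follows (the e810-to-ice mapping is inferred from the trace; `|| true` plays the role of the `+ true` records). The setup.sh status output continues below with the device table.

  #!/usr/bin/env bash
  # Sketch of the config-source + NIC driver reset traced above.
  set -ex
  conf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
  [[ -f $conf ]] && source "$conf"
  case $SPDK_TEST_NVMF_NICS in
      e810) DRIVERS=ice ;;   # mapping inferred from the trace above
  esac
  # Modules that are not loaded make rmmod fail; that is expected and ignored.
  sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
  for D in $DRIVERS; do
      sudo modprobe "$D"
  done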
00:01:41.247
00:01:41.247 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:41.247 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:41.247 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:41.247 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:41.247 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:41.247 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:41.247 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:41.247 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:41.247 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:41.247 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:41.247 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:41.247 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:41.247 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:41.247 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:41.247 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:41.247 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:41.247 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:41.247 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:41.247 + rm -f /tmp/spdk-ld-path
00:01:41.247 + source autorun-spdk.conf
00:01:41.247 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:41.247 ++ SPDK_TEST_NVMF=1
00:01:41.247 ++ SPDK_TEST_NVME_CLI=1
00:01:41.247 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:41.247 ++ SPDK_TEST_NVMF_NICS=e810
00:01:41.247 ++ SPDK_TEST_VFIOUSER=1
00:01:41.247 ++ SPDK_RUN_UBSAN=1
00:01:41.247 ++ NET_TYPE=phy
00:01:41.247 ++ RUN_NIGHTLY=0
00:01:41.247 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:41.247 + [[ -n '' ]]
00:01:41.247 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:41.247 + for M in /var/spdk/build-*-manifest.txt
00:01:41.247 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:41.247 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:41.247 + for M in /var/spdk/build-*-manifest.txt
00:01:41.247 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:41.247 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:41.247 + for M in /var/spdk/build-*-manifest.txt
00:01:41.247 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:41.247 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:41.247 ++ uname
00:01:41.247 + [[ Linux == \L\i\n\u\x ]]
00:01:41.247 + sudo dmesg -T
00:01:41.247 + sudo dmesg --clear
00:01:41.247 + dmesg_pid=1634292
00:01:41.247 + [[ Fedora Linux == FreeBSD ]]
00:01:41.247 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:41.247 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:41.247 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:41.247 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:41.247 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:41.247 + [[ -x /usr/src/fio-static/fio ]]
00:01:41.247 + export FIO_BIN=/usr/src/fio-static/fio
00:01:41.247 + FIO_BIN=/usr/src/fio-static/fio
00:01:41.247 + sudo dmesg -Tw
00:01:41.247 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:41.247 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:41.247 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:41.247 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:41.247 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:41.247 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:41.247 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:41.247 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:41.247 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:41.509 16:44:33 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
16:44:33 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
16:44:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
16:44:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
16:44:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
16:44:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
16:44:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
16:44:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
16:44:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
16:44:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
16:44:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
16:44:33 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
16:44:33 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
16:44:33 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
16:44:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
16:44:33 -- scripts/common.sh@15 -- $ shopt -s extglob
16:44:33 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
16:44:33 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
16:44:33 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
16:44:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:44:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:44:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:44:33 -- paths/export.sh@5 -- $ export PATH
16:44:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:44:33 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
16:44:33 -- common/autobuild_common.sh@493 -- $ date +%s
16:44:33 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732117473.XXXXXX
16:44:33 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732117473.bIso34
16:44:33 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
16:44:33 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
16:44:33 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
16:44:33 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
16:44:33 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
16:44:33 -- common/autobuild_common.sh@509 -- $ get_config_params
16:44:33 -- common/autotest_common.sh@409 -- $ xtrace_disable
16:44:33 -- common/autotest_common.sh@10 -- $ set +x
16:44:33 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
16:44:33 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
16:44:33 -- pm/common@17 -- $ local monitor
16:44:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:44:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:44:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:44:33 -- pm/common@21 -- $ date +%s
16:44:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:44:33 -- pm/common@21 -- $ date +%s
16:44:33 -- pm/common@25 -- $ sleep 1
16:44:33 -- pm/common@21 -- $ date +%s
16:44:33 -- pm/common@21 -- $ date +%s
16:44:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732117473
16:44:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732117473
16:44:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732117473
16:44:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732117473
00:01:41.510 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732117473_collect-vmstat.pm.log
00:01:41.510 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732117473_collect-cpu-load.pm.log
00:01:41.510 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732117473_collect-cpu-temp.pm.log
00:01:41.510 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732117473_collect-bmc-pm.bmc.pm.log
00:01:42.453 16:44:34 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
16:44:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
16:44:34 -- spdk/autobuild.sh@12 -- $ umask 022
16:44:34 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
16:44:34 -- spdk/autobuild.sh@16 -- $ date -u
00:01:42.453 Wed Nov 20 03:44:34 PM UTC 2024
16:44:34 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:42.453 v25.01-pre-245-g325a79ea3
16:44:34 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
16:44:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
16:44:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
16:44:34 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
16:44:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable
16:44:34 -- common/autotest_common.sh@10 -- $ set +x
00:01:42.714 ************************************
00:01:42.714 START TEST ubsan
00:01:42.714 ************************************
16:44:34 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:42.714 using ubsan
00:01:42.714
00:01:42.714 real 0m0.001s
00:01:42.714 user 0m0.001s
00:01:42.714 sys 0m0.000s
16:44:34 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
16:44:34 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:42.714 ************************************
00:01:42.714 END TEST ubsan
00:01:42.714 ************************************
16:44:34 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
16:44:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
16:44:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
16:44:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
16:44:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
16:44:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
16:44:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
16:44:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
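The pm/common trace above launches one collector per monitored resource, all writing into a shared output directory under a common timestamped prefix, so that the stop_monitor_resources trap installed right after can find and stop them on exit. A minimal sketch of that launch pattern (the backgrounding with & is an assumption; as the trace shows, the real helper also calls date +%s per monitor and sleeps between launches). The configure step follows below.

  #!/usr/bin/env bash
  # Sketch: start resource collectors against one output dir and prefix.
  pm=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
  out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
  prefix=monitor.autobuild.sh.$(date +%s)
  for collector in collect-cpu-load collect-vmstat collect-cpu-temp; do
      "$pm/$collector" -d "$out" -l -p "$prefix" &   # assumed backgrounding
  done
  # BMC power readings need elevated privileges; -E keeps the environment.
  sudo -E "$pm/collect-bmc-pm" -d "$out" -l -p "$prefix" &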
16:44:34 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:42.715 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:42.715 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:43.287 Using 'verbs' RDMA provider
00:01:59.156 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:11.395 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:11.965 Creating mk/config.mk...done.
00:02:11.965 Creating mk/cc.flags.mk...done.
00:02:11.965 Type 'make' to build.
00:02:11.965 16:45:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
16:45:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
16:45:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable
16:45:03 -- common/autotest_common.sh@10 -- $ set +x
00:02:11.965 ************************************
00:02:11.965 START TEST make
00:02:11.965 ************************************
16:45:04 make -- common/autotest_common.sh@1129 -- $ make -j144
00:02:12.536 make[1]: Nothing to be done for 'all'.
00:02:13.916 The Meson build system
00:02:13.916 Version: 1.5.0
00:02:13.916 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:13.916 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:13.916 Build type: native build
00:02:13.916 Project name: libvfio-user
00:02:13.916 Project version: 0.0.1
00:02:13.916 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:13.916 C linker for the host machine: cc ld.bfd 2.40-14
00:02:13.916 Host machine cpu family: x86_64
00:02:13.916 Host machine cpu: x86_64
00:02:13.916 Run-time dependency threads found: YES
00:02:13.916 Library dl found: YES
00:02:13.916 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:13.916 Run-time dependency json-c found: YES 0.17
00:02:13.916 Run-time dependency cmocka found: YES 1.1.7
00:02:13.916 Program pytest-3 found: NO
00:02:13.916 Program flake8 found: NO
00:02:13.916 Program misspell-fixer found: NO
00:02:13.916 Program restructuredtext-lint found: NO
00:02:13.916 Program valgrind found: YES (/usr/bin/valgrind)
00:02:13.916 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:13.916 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:13.916 Compiler for C supports arguments -Wwrite-strings: YES
00:02:13.916 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:13.916 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:13.916 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:13.916 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:13.916 Build targets in project: 8
00:02:13.916 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:13.916 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:13.916
00:02:13.916 libvfio-user 0.0.1
00:02:13.916
00:02:13.916 User defined options
00:02:13.916 buildtype : debug
00:02:13.916 default_library: shared
00:02:13.916 libdir : /usr/local/lib
00:02:13.916
00:02:13.916 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:14.487 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
[1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
[2/37] Compiling C object samples/lspci.p/lspci.c.o
[3/37] Compiling C object samples/null.p/null.c.o
[4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
[5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
[6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
[7/37] Compiling C object samples/client.p/.._lib_migration.c.o
[8/37] Compiling C object samples/client.p/.._lib_tran.c.o
[9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
[10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
[11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
[12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
[13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
[14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
[15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
[16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
[17/37] Compiling C object test/unit_tests.p/mocks.c.o
[18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
[19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
[20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
[21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
[22/37] Compiling C object samples/server.p/server.c.o
[23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
[24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
[25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
[26/37] Compiling C object samples/client.p/client.c.o
[27/37] Linking target samples/client
[28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
[29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
[30/37] Linking target lib/libvfio-user.so.0.0.1
[31/37] Linking target test/unit_tests
[32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
[33/37] Linking target samples/gpio-pci-idio-16
[34/37] Linking target samples/server
[35/37] Linking target samples/lspci
[36/37] Linking target samples/null
[37/37] Linking target samples/shadow_ioeventfd_server
00:02:15.011 INFO: autodetecting backend as ninja
00:02:15.011 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
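For reference, the libvfio-user build that just completed, plus the staged install in the step that follows, reduce to a standard meson/ninja sequence: configure once, build with ninja, then install under DESTDIR so nothing lands in the live /usr/local/lib. The setup invocation below is an assumption reconstructed from the options the summary prints (buildtype debug, default_library shared); the ninja and DESTDIR install commands appear verbatim in the log.

  #!/usr/bin/env bash
  src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
  build=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user
  # Assumed configure step matching the printed options:
  meson setup "$build/build-debug" "$src" --buildtype=debug -Ddefault_library=shared
  ninja -C "$build/build-debug"
  # Stage the artifacts under $build instead of the system libdir:
  DESTDIR="$build" meson install --quiet -C "$build/build-debug"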
00:02:15.011 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:15.273 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:15.273 ninja: no work to do.
00:02:21.865 The Meson build system
00:02:21.865 Version: 1.5.0
00:02:21.865 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:21.865 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:21.865 Build type: native build
00:02:21.865 Program cat found: YES (/usr/bin/cat)
00:02:21.865 Project name: DPDK
00:02:21.865 Project version: 24.03.0
00:02:21.865 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:21.865 C linker for the host machine: cc ld.bfd 2.40-14
00:02:21.865 Host machine cpu family: x86_64
00:02:21.865 Host machine cpu: x86_64
00:02:21.865 Message: ## Building in Developer Mode ##
00:02:21.865 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:21.865 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:21.865 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:21.865 Program python3 found: YES (/usr/bin/python3)
00:02:21.865 Program cat found: YES (/usr/bin/cat)
00:02:21.865 Compiler for C supports arguments -march=native: YES
00:02:21.865 Checking for size of "void *" : 8
00:02:21.865 Checking for size of "void *" : 8 (cached)
00:02:21.865 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:21.865 Library m found: YES
00:02:21.865 Library numa found: YES
00:02:21.865 Has header "numaif.h" : YES
00:02:21.865 Library fdt found: NO
00:02:21.865 Library execinfo found: NO
00:02:21.865 Has header "execinfo.h" : YES
00:02:21.865 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:21.865 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:21.865 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:21.865 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:21.865 Run-time dependency openssl found: YES 3.1.1
00:02:21.865 Run-time dependency libpcap found: YES 1.10.4
00:02:21.865 Has header "pcap.h" with dependency libpcap: YES
00:02:21.865 Compiler for C supports arguments -Wcast-qual: YES
00:02:21.865 Compiler for C supports arguments -Wdeprecated: YES
00:02:21.865 Compiler for C supports arguments -Wformat: YES
00:02:21.865 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:21.865 Compiler for C supports arguments -Wformat-security: NO
00:02:21.865 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:21.865 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:21.865 Compiler for C supports arguments -Wnested-externs: YES
00:02:21.865 Compiler for C supports arguments -Wold-style-definition: YES
00:02:21.865 Compiler for C supports arguments -Wpointer-arith: YES
00:02:21.865 Compiler for C supports arguments -Wsign-compare: YES
00:02:21.865 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:21.865 Compiler for C supports arguments -Wundef: YES
00:02:21.865 Compiler for C supports arguments -Wwrite-strings: YES
00:02:21.865 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:21.865 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:21.865 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:21.865 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:21.865 Program objdump found: YES (/usr/bin/objdump)
00:02:21.865 Compiler for C supports arguments -mavx512f: YES
00:02:21.865 Checking if "AVX512 checking" compiles: YES
00:02:21.865 Fetching value of define "__SSE4_2__" : 1
00:02:21.865 Fetching value of define "__AES__" : 1
00:02:21.865 Fetching value of define "__AVX__" : 1
00:02:21.865 Fetching value of define "__AVX2__" : 1
00:02:21.865 Fetching value of define "__AVX512BW__" : 1
00:02:21.865 Fetching value of define "__AVX512CD__" : 1
00:02:21.865 Fetching value of define "__AVX512DQ__" : 1
00:02:21.865 Fetching value of define "__AVX512F__" : 1
00:02:21.865 Fetching value of define "__AVX512VL__" : 1
00:02:21.865 Fetching value of define "__PCLMUL__" : 1
00:02:21.865 Fetching value of define "__RDRND__" : 1
00:02:21.865 Fetching value of define "__RDSEED__" : 1
00:02:21.865 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:21.865 Fetching value of define "__znver1__" : (undefined)
00:02:21.865 Fetching value of define "__znver2__" : (undefined)
00:02:21.865 Fetching value of define "__znver3__" : (undefined)
00:02:21.865 Fetching value of define "__znver4__" : (undefined)
00:02:21.865 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:21.865 Message: lib/log: Defining dependency "log"
00:02:21.865 Message: lib/kvargs: Defining dependency "kvargs"
00:02:21.865 Message: lib/telemetry: Defining dependency "telemetry"
00:02:21.865 Checking for function "getentropy" : NO
00:02:21.865 Message: lib/eal: Defining dependency "eal"
00:02:21.865 Message: lib/ring: Defining dependency "ring"
00:02:21.865 Message: lib/rcu: Defining dependency "rcu"
00:02:21.865 Message: lib/mempool: Defining dependency "mempool"
00:02:21.865 Message: lib/mbuf: Defining dependency "mbuf"
00:02:21.865 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:21.865 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:21.865 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:21.865 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:21.865 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:21.865 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:21.865 Compiler for C supports arguments -mpclmul: YES
00:02:21.865 Compiler for C supports arguments -maes: YES
00:02:21.865 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:21.865 Compiler for C supports arguments -mavx512bw: YES
00:02:21.865 Compiler for C supports arguments -mavx512dq: YES
00:02:21.865 Compiler for C supports arguments -mavx512vl: YES
00:02:21.865 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:21.865 Compiler for C supports arguments -mavx2: YES
00:02:21.865 Compiler for C supports arguments -mavx: YES
00:02:21.865 Message: lib/net: Defining dependency "net"
00:02:21.865 Message: lib/meter: Defining dependency "meter"
00:02:21.865 Message: lib/ethdev: Defining dependency "ethdev"
00:02:21.865 Message: lib/pci: Defining dependency "pci"
00:02:21.865 Message: lib/cmdline: Defining dependency "cmdline"
00:02:21.865 Message: lib/hash: Defining dependency "hash"
00:02:21.865 Message: lib/timer: Defining dependency "timer"
00:02:21.865 Message: lib/compressdev: Defining dependency "compressdev"
00:02:21.865 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:21.865 Message: lib/dmadev: Defining dependency "dmadev"
00:02:21.865 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:21.865 Message: lib/power: Defining dependency "power"
00:02:21.865 Message: lib/reorder: Defining dependency "reorder"
00:02:21.865 Message: lib/security: Defining dependency "security"
00:02:21.866 Has header "linux/userfaultfd.h" : YES
00:02:21.866 Has header "linux/vduse.h" : YES
00:02:21.866 Message: lib/vhost: Defining dependency "vhost"
00:02:21.866 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:21.866 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:21.866 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:21.866 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:21.866 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:21.866 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:21.866 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:21.866 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:21.866 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:21.866 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:21.866 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:21.866 Configuring doxy-api-html.conf using configuration
00:02:21.866 Configuring doxy-api-man.conf using configuration
00:02:21.866 Program mandb found: YES (/usr/bin/mandb)
00:02:21.866 Program sphinx-build found: NO
00:02:21.866 Configuring rte_build_config.h using configuration
00:02:21.866 Message:
00:02:21.866 =================
00:02:21.866 Applications Enabled
00:02:21.866 =================
00:02:21.866
00:02:21.866 apps:
00:02:21.866
00:02:21.866
00:02:21.866 Message:
00:02:21.866 =================
00:02:21.866 Libraries Enabled
00:02:21.866 =================
00:02:21.866
00:02:21.866 libs:
00:02:21.866 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:21.866 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:21.866 cryptodev, dmadev, power, reorder, security, vhost,
00:02:21.866
00:02:21.866 Message:
00:02:21.866 ===============
00:02:21.866 Drivers Enabled
00:02:21.866 ===============
00:02:21.866
00:02:21.866 common:
00:02:21.866
00:02:21.866 bus:
00:02:21.866 pci, vdev,
00:02:21.866 mempool:
00:02:21.866 ring,
00:02:21.866 dma:
00:02:21.866
00:02:21.866 net:
00:02:21.866
00:02:21.866 crypto:
00:02:21.866
00:02:21.866 compress:
00:02:21.866
00:02:21.866 vdpa:
00:02:21.866
00:02:21.866
00:02:21.866 Message:
00:02:21.866 =================
00:02:21.866 Content Skipped
00:02:21.866 =================
00:02:21.866
00:02:21.866 apps:
00:02:21.866 dumpcap: explicitly disabled via build config
00:02:21.866 graph: explicitly disabled via build config
00:02:21.866 pdump: explicitly disabled via build config
00:02:21.866 proc-info: explicitly disabled via build config
00:02:21.866 test-acl: explicitly disabled via build config
00:02:21.866 test-bbdev: explicitly disabled via build config
00:02:21.866 test-cmdline: explicitly disabled via build config
00:02:21.866 test-compress-perf: explicitly disabled via build config
00:02:21.866 test-crypto-perf: explicitly disabled via build config
00:02:21.866 test-dma-perf: explicitly disabled via build config
00:02:21.866 test-eventdev: explicitly disabled via build config
00:02:21.866 test-fib: explicitly disabled via build config
00:02:21.866 test-flow-perf: explicitly disabled via build config
00:02:21.866 test-gpudev: explicitly disabled via build config
00:02:21.866 test-mldev: explicitly disabled via build config
00:02:21.866 test-pipeline: explicitly disabled via build config
00:02:21.866 test-pmd: explicitly disabled via build config
00:02:21.866 test-regex: explicitly disabled via build config
00:02:21.866 test-sad: explicitly disabled via build config
00:02:21.866 test-security-perf: explicitly disabled via build config
00:02:21.866
00:02:21.866 libs:
00:02:21.866 argparse: explicitly disabled via build config
00:02:21.866 metrics: explicitly disabled via build config
00:02:21.866 acl: explicitly disabled via build config
00:02:21.866 bbdev: explicitly disabled via build config
00:02:21.866 bitratestats: explicitly disabled via build config
00:02:21.866 bpf: explicitly disabled via build config
00:02:21.866 cfgfile: explicitly disabled via build config
00:02:21.866 distributor: explicitly disabled via build config
00:02:21.866 efd: explicitly disabled via build config
00:02:21.866 eventdev: explicitly disabled via build config
00:02:21.866 dispatcher: explicitly disabled via build config
00:02:21.866 gpudev: explicitly disabled via build config
00:02:21.866 gro: explicitly disabled via build config
00:02:21.866 gso: explicitly disabled via build config
00:02:21.866 ip_frag: explicitly disabled via build config
00:02:21.866 jobstats: explicitly disabled via build config
00:02:21.866 latencystats: explicitly disabled via build config
00:02:21.866 lpm: explicitly disabled via build config
00:02:21.866 member: explicitly disabled via build config
00:02:21.866 pcapng: explicitly disabled via build config
00:02:21.866 rawdev: explicitly disabled via build config
00:02:21.866 regexdev: explicitly disabled via build config
00:02:21.866 mldev: explicitly disabled via build config
00:02:21.866 rib: explicitly disabled via build config
00:02:21.866 sched: explicitly disabled via build config
00:02:21.866 stack: explicitly disabled via build config
00:02:21.866 ipsec: explicitly disabled via build config
00:02:21.866 pdcp: explicitly disabled via build config
00:02:21.866 fib: explicitly disabled via build config
00:02:21.866 port: explicitly disabled via build config
00:02:21.866 pdump: explicitly disabled via build config
00:02:21.866 table: explicitly disabled via build config
00:02:21.866 pipeline: explicitly disabled via build config
00:02:21.866 graph: explicitly disabled via build config
00:02:21.866 node: explicitly disabled via build config
00:02:21.866
00:02:21.866 drivers:
00:02:21.866 common/cpt: not in enabled drivers build config
00:02:21.866 common/dpaax: not in enabled drivers build config
00:02:21.866 common/iavf: not in enabled drivers build config
00:02:21.866 common/idpf: not in enabled drivers build config
00:02:21.866 common/ionic: not in enabled drivers build config
00:02:21.866 common/mvep: not in enabled drivers build config
00:02:21.866 common/octeontx: not in enabled drivers build config
00:02:21.866 bus/auxiliary: not in enabled drivers build config
00:02:21.866 bus/cdx: not in enabled drivers build config
00:02:21.866 bus/dpaa: not in enabled drivers build config
00:02:21.866 bus/fslmc: not in enabled drivers build config
00:02:21.866 bus/ifpga: not in enabled drivers build config
00:02:21.866 bus/platform: not in enabled drivers build config
00:02:21.866 bus/uacce: not in enabled drivers build config
00:02:21.866 bus/vmbus: not in enabled drivers build config
00:02:21.866 common/cnxk: not in enabled drivers build config
00:02:21.866 common/mlx5: not in enabled drivers build config
00:02:21.866 common/nfp: not in enabled drivers build config
00:02:21.866 common/nitrox: not in enabled drivers build config
00:02:21.866 common/qat: not in enabled drivers build config
00:02:21.866 common/sfc_efx: not in enabled drivers build config
00:02:21.866 mempool/bucket: not in enabled drivers build config
00:02:21.866 mempool/cnxk: not in enabled drivers build config
00:02:21.866 mempool/dpaa: not in enabled drivers build config
00:02:21.866 mempool/dpaa2: not in enabled drivers build config
00:02:21.866 mempool/octeontx: not in enabled drivers build config
00:02:21.866 mempool/stack: not in enabled drivers build config
00:02:21.866 dma/cnxk: not in enabled drivers build config
00:02:21.866 dma/dpaa: not in enabled drivers build config
00:02:21.866 dma/dpaa2: not in enabled drivers build config
00:02:21.866 dma/hisilicon: not in enabled drivers build config
00:02:21.866 dma/idxd: not in enabled drivers build config
00:02:21.866 dma/ioat: not in enabled drivers build config
00:02:21.866 dma/skeleton: not in enabled drivers build config
00:02:21.866 net/af_packet: not in enabled drivers build config
00:02:21.866 net/af_xdp: not in enabled drivers build config
00:02:21.866 net/ark: not in enabled drivers build config
00:02:21.866 net/atlantic: not in enabled drivers build config
00:02:21.866 net/avp: not in enabled drivers build config
00:02:21.866 net/axgbe: not in enabled drivers build config
00:02:21.866 net/bnx2x: not in enabled drivers build config
00:02:21.866 net/bnxt: not in enabled drivers build config
00:02:21.866 net/bonding: not in enabled drivers build config
00:02:21.866 net/cnxk: not in enabled drivers build config
00:02:21.866 net/cpfl: not in enabled drivers build config
00:02:21.866 net/cxgbe: not in enabled drivers build config
00:02:21.866 net/dpaa: not in enabled drivers build config
00:02:21.866 net/dpaa2: not in enabled drivers build config
00:02:21.866 net/e1000: not in enabled drivers build config
00:02:21.866 net/ena: not in enabled drivers build config
00:02:21.866 net/enetc: not in enabled drivers build config
00:02:21.866 net/enetfec: not in enabled drivers build config
00:02:21.866 net/enic: not in enabled drivers build config
00:02:21.866 net/failsafe: not in enabled drivers build config
00:02:21.866 net/fm10k: not in enabled drivers build config
00:02:21.866 net/gve: not in enabled drivers build config
00:02:21.866 net/hinic: not in enabled drivers build config
00:02:21.866 net/hns3: not in enabled drivers build config
00:02:21.866 net/i40e: not in enabled drivers build config
00:02:21.866 net/iavf: not in enabled drivers build config
00:02:21.866 net/ice: not in enabled drivers build config
00:02:21.866 net/idpf: not in enabled drivers build config
00:02:21.866 net/igc: not in enabled drivers build config
00:02:21.866 net/ionic: not in enabled drivers build config
00:02:21.866 net/ipn3ke: not in enabled drivers build config
00:02:21.866 net/ixgbe: not in enabled drivers build config
00:02:21.866 net/mana: not in enabled drivers build config
00:02:21.866 net/memif: not in enabled drivers build config
00:02:21.866 net/mlx4: not in enabled drivers build config
00:02:21.866 net/mlx5: not in enabled drivers build config
00:02:21.866 net/mvneta: not in enabled drivers build config
00:02:21.866 net/mvpp2: not in enabled drivers build config
00:02:21.866 net/netvsc: not in enabled drivers build config
00:02:21.866 net/nfb: not in enabled drivers build config
00:02:21.866 net/nfp: not in enabled drivers build config
00:02:21.866 net/ngbe: not in enabled drivers build config
00:02:21.866 net/null: not in enabled drivers build config
00:02:21.866 net/octeontx: not in enabled drivers build config
00:02:21.866 net/octeon_ep: not in enabled drivers build config
00:02:21.866 net/pcap: not in enabled drivers build config
00:02:21.867 net/pfe: not in enabled drivers build config
00:02:21.867 net/qede: not in enabled drivers build config
00:02:21.867 net/ring: not in enabled drivers build config
00:02:21.867 net/sfc: not in enabled drivers build config
00:02:21.867 net/softnic: not in enabled drivers build config
00:02:21.867 net/tap: not in enabled drivers build config
00:02:21.867 net/thunderx: not in enabled drivers build config
00:02:21.867 net/txgbe: not in enabled drivers build config
00:02:21.867 net/vdev_netvsc: not in enabled drivers build config
00:02:21.867 net/vhost: not in enabled drivers build config
00:02:21.867 net/virtio: not in enabled drivers build config
00:02:21.867 net/vmxnet3: not in enabled drivers build config
00:02:21.867 raw/*: missing internal dependency, "rawdev"
00:02:21.867 crypto/armv8: not in enabled drivers build config
00:02:21.867 crypto/bcmfs: not in enabled drivers build config
00:02:21.867 crypto/caam_jr: not in enabled drivers build config
00:02:21.867 crypto/ccp: not in enabled drivers build config
00:02:21.867 crypto/cnxk: not in enabled drivers build config
00:02:21.867 crypto/dpaa_sec: not in enabled drivers build config
00:02:21.867 crypto/dpaa2_sec: not in enabled drivers build config
00:02:21.867 crypto/ipsec_mb: not in enabled drivers build config
00:02:21.867 crypto/mlx5: not in enabled drivers build config
00:02:21.867 crypto/mvsam: not in enabled drivers build config
00:02:21.867 crypto/nitrox: not in enabled drivers build config
00:02:21.867 crypto/null: not in enabled drivers build config
00:02:21.867 crypto/octeontx: not in enabled drivers build config
00:02:21.867 crypto/openssl: not in enabled drivers build config
00:02:21.867 crypto/scheduler: not in enabled drivers build config
00:02:21.867 crypto/uadk: not in enabled drivers build config
00:02:21.867 crypto/virtio: not in enabled drivers build config
00:02:21.867 compress/isal: not in enabled drivers build config
00:02:21.867 compress/mlx5: not in enabled drivers build config
00:02:21.867 compress/nitrox: not in enabled drivers build config
00:02:21.867 compress/octeontx: not in enabled drivers build config
00:02:21.867 compress/zlib: not in enabled drivers build config
00:02:21.867 regex/*: missing internal dependency, "regexdev"
00:02:21.867 ml/*: missing internal dependency, "mldev"
00:02:21.867 vdpa/ifc: not in enabled drivers build config
00:02:21.867 vdpa/mlx5: not in enabled drivers build config
00:02:21.867 vdpa/nfp: not in enabled drivers build config
00:02:21.867 vdpa/sfc: not in enabled drivers build config
00:02:21.867 event/*: missing internal dependency, "eventdev"
00:02:21.867 baseband/*: missing internal dependency, "bbdev"
00:02:21.867 gpu/*: missing internal dependency, "gpudev"
00:02:21.867
00:02:21.867
00:02:21.867 Build targets in project: 84
00:02:21.867
00:02:21.867 DPDK 24.03.0
00:02:21.867
00:02:21.867 User defined options
00:02:21.867 buildtype : debug
00:02:21.867 default_library : shared
00:02:21.867 libdir : lib
00:02:21.867 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:21.867 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:21.867 c_link_args :
00:02:21.867 cpu_instruction_set: native
00:02:21.867 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:02:21.867 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:02:21.867 enable_docs : false
00:02:21.867 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:21.867 enable_kmods : false
00:02:21.867 max_lcores : 128
00:02:21.867 tests : false
00:02:21.867
00:02:21.867 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:21.867 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
[1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
[2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
[3/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
[4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
[5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
[6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
[7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
[8/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
[9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
[10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
[11/267] Compiling C object lib/librte_log.a.p/log_log.c.o
[12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
[13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
[14/267] Linking static target lib/librte_kvargs.a
[15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
[16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
[17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
[18/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
[19/267] Linking static target lib/librte_log.a
[20/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
[21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
[22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
[23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
[24/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
[25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
[26/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
[27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
[28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
[29/267] Linking static target lib/librte_pci.a
[30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
[31/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
[32/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
[33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
[34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
[35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
[36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
[37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
[38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
[39/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
[40/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
[41/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
[42/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
[43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
[44/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
[45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
[46/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
[47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
[48/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
[49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
[50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
[51/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
[52/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
[53/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
[54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
[55/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
[56/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
[57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
[58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
[59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
[60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
[61/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
[62/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
[63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
[64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
[65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
[66/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
[67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
[68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
[69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
[70/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
[71/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
[72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
[73/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
[74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
[75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
[76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
[77/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
[78/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
[79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
[80/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
[81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
[82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
[83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
[84/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
[85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
[86/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
[87/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
[88/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
[89/267] Linking static target lib/librte_meter.a
[90/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
[91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
[92/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
[93/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
[94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
[95/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
[96/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
[97/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
[98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
[99/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
[100/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
[101/267] Linking static target lib/librte_telemetry.a
[102/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
[103/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
[104/267] Linking static target lib/librte_timer.a
[105/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
[106/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
[107/267] Linking static target lib/librte_ring.a
[108/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
[109/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
[110/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
[111/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
[112/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
[113/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
[114/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
[115/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
[116/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
[117/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
[118/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
[119/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
[120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
[121/267] Linking static target lib/librte_cmdline.a
[122/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
[123/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
[124/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
[125/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
[126/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
[127/267] Linking static target lib/librte_reorder.a
[128/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
[129/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
[130/267] Linking static target lib/librte_net.a
[131/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
[132/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
[133/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
[134/267] Linking static target lib/librte_compressdev.a
[135/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
[136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
[137/267] Linking static target drivers/libtmp_rte_bus_vdev.a
[138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
[139/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
[140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
[141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
[142/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
[143/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
[144/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
[145/267] Linking target lib/librte_log.so.24.1
[146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
[147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
[148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
[149/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
[150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
[151/267] Compiling C
object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:22.651 [152/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:22.651 [153/267] Linking static target lib/librte_mempool.a 00:02:22.651 [154/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:22.651 [155/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:22.651 [156/267] Linking static target lib/librte_dmadev.a 00:02:22.651 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:22.651 [158/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:22.651 [159/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:22.651 [160/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:22.651 [161/267] Linking static target lib/librte_power.a 00:02:22.651 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:22.651 [163/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:22.651 [164/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:22.651 [165/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:22.651 [166/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:22.651 [167/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:22.651 [168/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:22.651 [169/267] Linking static target lib/librte_rcu.a 00:02:22.651 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:22.651 [171/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:22.651 [172/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:22.651 [173/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:22.651 [174/267] Linking static target lib/librte_security.a 00:02:22.651 [175/267] Linking static target lib/librte_eal.a 00:02:22.651 [176/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:22.651 [177/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:22.651 [178/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:22.651 [179/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:22.651 [180/267] Linking static target lib/librte_mbuf.a 00:02:22.651 [181/267] Linking target lib/librte_kvargs.so.24.1 00:02:22.651 [182/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.651 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:22.651 [184/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:22.651 [185/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:22.651 [186/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:22.651 [187/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.651 [188/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.651 [189/267] Linking static target drivers/librte_bus_vdev.a 00:02:22.912 [190/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.912 [191/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:22.912 [192/267] Generating symbol file 
lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:22.912 [193/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:22.912 [194/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.912 [195/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:22.912 [196/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.912 [197/267] Linking static target drivers/librte_bus_pci.a 00:02:22.912 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:22.912 [199/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:22.912 [200/267] Linking static target lib/librte_hash.a 00:02:22.912 [201/267] Linking static target lib/librte_cryptodev.a 00:02:22.912 [202/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.912 [203/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:22.912 [204/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.912 [205/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.912 [206/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.912 [207/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.912 [208/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:22.912 [209/267] Linking static target drivers/librte_mempool_ring.a 00:02:22.912 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.173 [211/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.173 [212/267] Linking target lib/librte_telemetry.so.24.1 00:02:23.173 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.173 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:23.435 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.435 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.435 [217/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.435 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:23.435 [219/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:23.435 [220/267] Linking static target lib/librte_ethdev.a 00:02:23.696 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.696 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.696 [223/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.696 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.957 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.957 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.528 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:24.528 [228/267] Linking 
static target lib/librte_vhost.a 00:02:25.156 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.538 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.124 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.068 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.068 [233/267] Linking target lib/librte_eal.so.24.1 00:02:34.328 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:34.328 [235/267] Linking target lib/librte_ring.so.24.1 00:02:34.328 [236/267] Linking target lib/librte_timer.so.24.1 00:02:34.328 [237/267] Linking target lib/librte_meter.so.24.1 00:02:34.328 [238/267] Linking target lib/librte_pci.so.24.1 00:02:34.328 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:34.328 [240/267] Linking target lib/librte_dmadev.so.24.1 00:02:34.589 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:34.589 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:34.589 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:34.589 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:34.589 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:34.589 [246/267] Linking target lib/librte_rcu.so.24.1 00:02:34.589 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:34.589 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:34.589 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:34.589 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:34.589 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:34.589 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:34.849 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:34.849 [254/267] Linking target lib/librte_net.so.24.1 00:02:34.849 [255/267] Linking target lib/librte_reorder.so.24.1 00:02:34.849 [256/267] Linking target lib/librte_compressdev.so.24.1 00:02:34.849 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:35.111 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:35.111 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:35.111 [260/267] Linking target lib/librte_hash.so.24.1 00:02:35.111 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:35.111 [262/267] Linking target lib/librte_ethdev.so.24.1 00:02:35.111 [263/267] Linking target lib/librte_security.so.24.1 00:02:35.111 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:35.111 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:35.371 [266/267] Linking target lib/librte_power.so.24.1 00:02:35.371 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:35.371 INFO: autodetecting backend as ninja 00:02:35.371 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:38.671 CC lib/ut/ut.o 00:02:38.671 CC lib/ut_mock/mock.o 00:02:38.671 CC lib/log/log.o 00:02:38.671 CC lib/log/log_flags.o 
00:02:38.671 CC lib/log/log_deprecated.o 00:02:38.671 LIB libspdk_ut.a 00:02:38.671 SO libspdk_ut.so.2.0 00:02:38.671 LIB libspdk_ut_mock.a 00:02:38.671 LIB libspdk_log.a 00:02:38.671 SO libspdk_ut_mock.so.6.0 00:02:38.671 SYMLINK libspdk_ut.so 00:02:38.671 SO libspdk_log.so.7.1 00:02:38.671 SYMLINK libspdk_ut_mock.so 00:02:38.671 SYMLINK libspdk_log.so 00:02:38.931 CC lib/util/base64.o 00:02:38.931 CC lib/util/bit_array.o 00:02:38.931 CC lib/util/cpuset.o 00:02:38.931 CC lib/util/crc16.o 00:02:38.931 CC lib/dma/dma.o 00:02:38.931 CC lib/util/crc32.o 00:02:38.931 CC lib/util/crc32c.o 00:02:38.931 CC lib/ioat/ioat.o 00:02:38.931 CXX lib/trace_parser/trace.o 00:02:38.931 CC lib/util/crc32_ieee.o 00:02:38.931 CC lib/util/crc64.o 00:02:38.931 CC lib/util/dif.o 00:02:38.931 CC lib/util/fd.o 00:02:38.931 CC lib/util/fd_group.o 00:02:39.192 CC lib/util/file.o 00:02:39.192 CC lib/util/hexlify.o 00:02:39.192 CC lib/util/iov.o 00:02:39.192 CC lib/util/math.o 00:02:39.192 CC lib/util/net.o 00:02:39.192 CC lib/util/pipe.o 00:02:39.192 CC lib/util/strerror_tls.o 00:02:39.192 CC lib/util/string.o 00:02:39.192 CC lib/util/uuid.o 00:02:39.192 CC lib/util/xor.o 00:02:39.192 CC lib/util/zipf.o 00:02:39.192 CC lib/util/md5.o 00:02:39.192 CC lib/vfio_user/host/vfio_user_pci.o 00:02:39.192 CC lib/vfio_user/host/vfio_user.o 00:02:39.192 LIB libspdk_dma.a 00:02:39.192 SO libspdk_dma.so.5.0 00:02:39.454 LIB libspdk_ioat.a 00:02:39.454 SYMLINK libspdk_dma.so 00:02:39.454 SO libspdk_ioat.so.7.0 00:02:39.454 SYMLINK libspdk_ioat.so 00:02:39.454 LIB libspdk_vfio_user.a 00:02:39.454 SO libspdk_vfio_user.so.5.0 00:02:39.716 LIB libspdk_util.a 00:02:39.716 SYMLINK libspdk_vfio_user.so 00:02:39.716 SO libspdk_util.so.10.1 00:02:39.716 SYMLINK libspdk_util.so 00:02:39.978 LIB libspdk_trace_parser.a 00:02:39.978 SO libspdk_trace_parser.so.6.0 00:02:39.978 SYMLINK libspdk_trace_parser.so 00:02:40.238 CC lib/vmd/vmd.o 00:02:40.238 CC lib/conf/conf.o 00:02:40.238 CC lib/vmd/led.o 00:02:40.238 CC lib/idxd/idxd.o 00:02:40.238 CC lib/json/json_parse.o 00:02:40.238 CC lib/rdma_utils/rdma_utils.o 00:02:40.238 CC lib/json/json_util.o 00:02:40.238 CC lib/idxd/idxd_user.o 00:02:40.238 CC lib/idxd/idxd_kernel.o 00:02:40.238 CC lib/json/json_write.o 00:02:40.238 CC lib/env_dpdk/env.o 00:02:40.238 CC lib/env_dpdk/memory.o 00:02:40.238 CC lib/env_dpdk/pci.o 00:02:40.238 CC lib/env_dpdk/init.o 00:02:40.238 CC lib/env_dpdk/threads.o 00:02:40.238 CC lib/env_dpdk/pci_ioat.o 00:02:40.238 CC lib/env_dpdk/pci_virtio.o 00:02:40.238 CC lib/env_dpdk/pci_vmd.o 00:02:40.238 CC lib/env_dpdk/pci_idxd.o 00:02:40.238 CC lib/env_dpdk/pci_event.o 00:02:40.238 CC lib/env_dpdk/sigbus_handler.o 00:02:40.238 CC lib/env_dpdk/pci_dpdk.o 00:02:40.238 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:40.238 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:40.499 LIB libspdk_conf.a 00:02:40.499 SO libspdk_conf.so.6.0 00:02:40.499 LIB libspdk_rdma_utils.a 00:02:40.499 SYMLINK libspdk_conf.so 00:02:40.499 LIB libspdk_json.a 00:02:40.499 SO libspdk_rdma_utils.so.1.0 00:02:40.499 SO libspdk_json.so.6.0 00:02:40.499 SYMLINK libspdk_rdma_utils.so 00:02:40.499 SYMLINK libspdk_json.so 00:02:40.761 LIB libspdk_idxd.a 00:02:40.761 LIB libspdk_vmd.a 00:02:40.761 SO libspdk_idxd.so.12.1 00:02:40.761 SO libspdk_vmd.so.6.0 00:02:40.761 SYMLINK libspdk_idxd.so 00:02:40.761 SYMLINK libspdk_vmd.so 00:02:41.023 CC lib/rdma_provider/common.o 00:02:41.023 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:41.023 CC lib/jsonrpc/jsonrpc_server.o 00:02:41.023 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:41.023 CC 
lib/jsonrpc/jsonrpc_client.o 00:02:41.023 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:41.284 LIB libspdk_rdma_provider.a 00:02:41.284 LIB libspdk_jsonrpc.a 00:02:41.284 SO libspdk_rdma_provider.so.7.0 00:02:41.284 SO libspdk_jsonrpc.so.6.0 00:02:41.284 SYMLINK libspdk_rdma_provider.so 00:02:41.284 SYMLINK libspdk_jsonrpc.so 00:02:41.544 LIB libspdk_env_dpdk.a 00:02:41.544 SO libspdk_env_dpdk.so.15.1 00:02:41.544 SYMLINK libspdk_env_dpdk.so 00:02:41.804 CC lib/rpc/rpc.o 00:02:41.804 LIB libspdk_rpc.a 00:02:42.066 SO libspdk_rpc.so.6.0 00:02:42.066 SYMLINK libspdk_rpc.so 00:02:42.327 CC lib/trace/trace.o 00:02:42.327 CC lib/trace/trace_flags.o 00:02:42.327 CC lib/trace/trace_rpc.o 00:02:42.327 CC lib/notify/notify.o 00:02:42.327 CC lib/notify/notify_rpc.o 00:02:42.327 CC lib/keyring/keyring.o 00:02:42.327 CC lib/keyring/keyring_rpc.o 00:02:42.589 LIB libspdk_notify.a 00:02:42.589 SO libspdk_notify.so.6.0 00:02:42.589 LIB libspdk_keyring.a 00:02:42.589 LIB libspdk_trace.a 00:02:42.589 SYMLINK libspdk_notify.so 00:02:42.589 SO libspdk_keyring.so.2.0 00:02:42.589 SO libspdk_trace.so.11.0 00:02:42.852 SYMLINK libspdk_keyring.so 00:02:42.852 SYMLINK libspdk_trace.so 00:02:43.113 CC lib/thread/thread.o 00:02:43.113 CC lib/sock/sock.o 00:02:43.113 CC lib/thread/iobuf.o 00:02:43.113 CC lib/sock/sock_rpc.o 00:02:43.374 LIB libspdk_sock.a 00:02:43.635 SO libspdk_sock.so.10.0 00:02:43.635 SYMLINK libspdk_sock.so 00:02:43.895 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:43.895 CC lib/nvme/nvme_ctrlr.o 00:02:43.895 CC lib/nvme/nvme_fabric.o 00:02:43.895 CC lib/nvme/nvme_ns_cmd.o 00:02:43.895 CC lib/nvme/nvme_ns.o 00:02:43.895 CC lib/nvme/nvme_pcie_common.o 00:02:43.895 CC lib/nvme/nvme_pcie.o 00:02:43.895 CC lib/nvme/nvme_qpair.o 00:02:43.895 CC lib/nvme/nvme.o 00:02:43.895 CC lib/nvme/nvme_quirks.o 00:02:43.895 CC lib/nvme/nvme_transport.o 00:02:43.895 CC lib/nvme/nvme_discovery.o 00:02:43.895 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:43.895 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:43.895 CC lib/nvme/nvme_tcp.o 00:02:43.895 CC lib/nvme/nvme_opal.o 00:02:43.895 CC lib/nvme/nvme_io_msg.o 00:02:43.895 CC lib/nvme/nvme_poll_group.o 00:02:43.895 CC lib/nvme/nvme_zns.o 00:02:43.895 CC lib/nvme/nvme_stubs.o 00:02:43.895 CC lib/nvme/nvme_auth.o 00:02:43.895 CC lib/nvme/nvme_cuse.o 00:02:43.895 CC lib/nvme/nvme_vfio_user.o 00:02:43.895 CC lib/nvme/nvme_rdma.o 00:02:44.464 LIB libspdk_thread.a 00:02:44.464 SO libspdk_thread.so.11.0 00:02:44.464 SYMLINK libspdk_thread.so 00:02:44.723 CC lib/vfu_tgt/tgt_endpoint.o 00:02:44.723 CC lib/virtio/virtio.o 00:02:44.723 CC lib/vfu_tgt/tgt_rpc.o 00:02:44.723 CC lib/virtio/virtio_vhost_user.o 00:02:44.724 CC lib/virtio/virtio_vfio_user.o 00:02:44.724 CC lib/virtio/virtio_pci.o 00:02:44.724 CC lib/accel/accel.o 00:02:44.724 CC lib/accel/accel_rpc.o 00:02:44.724 CC lib/accel/accel_sw.o 00:02:44.984 CC lib/blob/blobstore.o 00:02:44.984 CC lib/fsdev/fsdev.o 00:02:44.984 CC lib/blob/request.o 00:02:44.984 CC lib/fsdev/fsdev_io.o 00:02:44.984 CC lib/blob/zeroes.o 00:02:44.984 CC lib/fsdev/fsdev_rpc.o 00:02:44.984 CC lib/blob/blob_bs_dev.o 00:02:44.984 CC lib/init/json_config.o 00:02:44.984 CC lib/init/subsystem.o 00:02:44.984 CC lib/init/subsystem_rpc.o 00:02:44.984 CC lib/init/rpc.o 00:02:45.244 LIB libspdk_init.a 00:02:45.244 SO libspdk_init.so.6.0 00:02:45.244 LIB libspdk_virtio.a 00:02:45.244 LIB libspdk_vfu_tgt.a 00:02:45.244 SO libspdk_virtio.so.7.0 00:02:45.244 SO libspdk_vfu_tgt.so.3.0 00:02:45.244 SYMLINK libspdk_init.so 00:02:45.244 SYMLINK libspdk_vfu_tgt.so 00:02:45.244 SYMLINK 
libspdk_virtio.so 00:02:45.505 LIB libspdk_fsdev.a 00:02:45.505 SO libspdk_fsdev.so.2.0 00:02:45.505 SYMLINK libspdk_fsdev.so 00:02:45.505 CC lib/event/app.o 00:02:45.505 CC lib/event/reactor.o 00:02:45.505 CC lib/event/log_rpc.o 00:02:45.765 CC lib/event/app_rpc.o 00:02:45.765 CC lib/event/scheduler_static.o 00:02:45.765 LIB libspdk_accel.a 00:02:45.765 SO libspdk_accel.so.16.0 00:02:46.026 LIB libspdk_nvme.a 00:02:46.026 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:46.026 SYMLINK libspdk_accel.so 00:02:46.026 SO libspdk_nvme.so.15.0 00:02:46.026 LIB libspdk_event.a 00:02:46.026 SO libspdk_event.so.14.0 00:02:46.287 SYMLINK libspdk_event.so 00:02:46.287 SYMLINK libspdk_nvme.so 00:02:46.287 CC lib/bdev/bdev.o 00:02:46.287 CC lib/bdev/bdev_rpc.o 00:02:46.287 CC lib/bdev/bdev_zone.o 00:02:46.287 CC lib/bdev/part.o 00:02:46.287 CC lib/bdev/scsi_nvme.o 00:02:46.548 LIB libspdk_fuse_dispatcher.a 00:02:46.548 SO libspdk_fuse_dispatcher.so.1.0 00:02:46.548 SYMLINK libspdk_fuse_dispatcher.so 00:02:47.680 LIB libspdk_blob.a 00:02:47.680 SO libspdk_blob.so.11.0 00:02:47.680 SYMLINK libspdk_blob.so 00:02:47.951 CC lib/lvol/lvol.o 00:02:47.951 CC lib/blobfs/blobfs.o 00:02:47.951 CC lib/blobfs/tree.o 00:02:48.892 LIB libspdk_bdev.a 00:02:48.892 SO libspdk_bdev.so.17.0 00:02:48.892 LIB libspdk_blobfs.a 00:02:48.892 SO libspdk_blobfs.so.10.0 00:02:48.892 SYMLINK libspdk_bdev.so 00:02:48.892 LIB libspdk_lvol.a 00:02:48.892 SYMLINK libspdk_blobfs.so 00:02:48.892 SO libspdk_lvol.so.10.0 00:02:48.893 SYMLINK libspdk_lvol.so 00:02:49.154 CC lib/scsi/dev.o 00:02:49.154 CC lib/scsi/lun.o 00:02:49.154 CC lib/scsi/port.o 00:02:49.154 CC lib/scsi/scsi_bdev.o 00:02:49.154 CC lib/scsi/scsi.o 00:02:49.154 CC lib/scsi/scsi_pr.o 00:02:49.154 CC lib/scsi/scsi_rpc.o 00:02:49.154 CC lib/scsi/task.o 00:02:49.154 CC lib/nvmf/ctrlr.o 00:02:49.154 CC lib/nvmf/ctrlr_discovery.o 00:02:49.154 CC lib/nvmf/ctrlr_bdev.o 00:02:49.154 CC lib/ftl/ftl_core.o 00:02:49.154 CC lib/nvmf/subsystem.o 00:02:49.154 CC lib/ftl/ftl_init.o 00:02:49.154 CC lib/nvmf/nvmf.o 00:02:49.154 CC lib/ftl/ftl_layout.o 00:02:49.154 CC lib/nvmf/nvmf_rpc.o 00:02:49.154 CC lib/ftl/ftl_debug.o 00:02:49.154 CC lib/nvmf/transport.o 00:02:49.154 CC lib/ftl/ftl_io.o 00:02:49.154 CC lib/nvmf/tcp.o 00:02:49.154 CC lib/nvmf/stubs.o 00:02:49.154 CC lib/ftl/ftl_sb.o 00:02:49.154 CC lib/ublk/ublk.o 00:02:49.154 CC lib/nvmf/mdns_server.o 00:02:49.154 CC lib/ublk/ublk_rpc.o 00:02:49.154 CC lib/ftl/ftl_l2p.o 00:02:49.154 CC lib/ftl/ftl_l2p_flat.o 00:02:49.154 CC lib/nvmf/vfio_user.o 00:02:49.154 CC lib/nbd/nbd.o 00:02:49.154 CC lib/nvmf/rdma.o 00:02:49.154 CC lib/ftl/ftl_nv_cache.o 00:02:49.154 CC lib/nbd/nbd_rpc.o 00:02:49.154 CC lib/nvmf/auth.o 00:02:49.154 CC lib/ftl/ftl_band.o 00:02:49.154 CC lib/ftl/ftl_band_ops.o 00:02:49.154 CC lib/ftl/ftl_writer.o 00:02:49.154 CC lib/ftl/ftl_rq.o 00:02:49.154 CC lib/ftl/ftl_reloc.o 00:02:49.154 CC lib/ftl/ftl_l2p_cache.o 00:02:49.154 CC lib/ftl/ftl_p2l.o 00:02:49.154 CC lib/ftl/ftl_p2l_log.o 00:02:49.154 CC lib/ftl/mngt/ftl_mngt.o 00:02:49.154 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:49.154 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:49.154 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:49.154 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:49.154 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:49.154 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:49.154 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:49.154 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:49.154 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:49.154 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:49.154 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:02:49.154 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:49.154 CC lib/ftl/utils/ftl_conf.o 00:02:49.154 CC lib/ftl/utils/ftl_md.o 00:02:49.154 CC lib/ftl/utils/ftl_mempool.o 00:02:49.154 CC lib/ftl/utils/ftl_bitmap.o 00:02:49.154 CC lib/ftl/utils/ftl_property.o 00:02:49.154 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:49.154 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:49.154 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:49.154 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:49.154 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:49.154 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:49.154 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:49.154 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:49.154 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:49.154 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:49.155 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:49.414 CC lib/ftl/base/ftl_base_dev.o 00:02:49.414 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:49.414 CC lib/ftl/base/ftl_base_bdev.o 00:02:49.414 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:49.414 CC lib/ftl/ftl_trace.o 00:02:49.985 LIB libspdk_nbd.a 00:02:49.985 SO libspdk_nbd.so.7.0 00:02:49.985 LIB libspdk_scsi.a 00:02:49.985 SYMLINK libspdk_nbd.so 00:02:49.985 SO libspdk_scsi.so.9.0 00:02:49.985 SYMLINK libspdk_scsi.so 00:02:50.245 LIB libspdk_ublk.a 00:02:50.245 SO libspdk_ublk.so.3.0 00:02:50.245 SYMLINK libspdk_ublk.so 00:02:50.506 LIB libspdk_ftl.a 00:02:50.506 CC lib/vhost/vhost.o 00:02:50.506 CC lib/iscsi/conn.o 00:02:50.506 CC lib/iscsi/init_grp.o 00:02:50.506 CC lib/vhost/vhost_rpc.o 00:02:50.506 CC lib/iscsi/iscsi.o 00:02:50.506 CC lib/vhost/vhost_scsi.o 00:02:50.506 CC lib/iscsi/param.o 00:02:50.506 CC lib/iscsi/portal_grp.o 00:02:50.506 CC lib/vhost/vhost_blk.o 00:02:50.506 CC lib/iscsi/tgt_node.o 00:02:50.506 CC lib/vhost/rte_vhost_user.o 00:02:50.506 CC lib/iscsi/iscsi_subsystem.o 00:02:50.506 CC lib/iscsi/iscsi_rpc.o 00:02:50.506 CC lib/iscsi/task.o 00:02:50.506 SO libspdk_ftl.so.9.0 00:02:51.078 SYMLINK libspdk_ftl.so 00:02:51.339 LIB libspdk_nvmf.a 00:02:51.339 SO libspdk_nvmf.so.20.0 00:02:51.339 LIB libspdk_vhost.a 00:02:51.600 SO libspdk_vhost.so.8.0 00:02:51.600 SYMLINK libspdk_vhost.so 00:02:51.600 SYMLINK libspdk_nvmf.so 00:02:51.600 LIB libspdk_iscsi.a 00:02:51.861 SO libspdk_iscsi.so.8.0 00:02:51.861 SYMLINK libspdk_iscsi.so 00:02:52.433 CC module/vfu_device/vfu_virtio.o 00:02:52.433 CC module/vfu_device/vfu_virtio_blk.o 00:02:52.433 CC module/vfu_device/vfu_virtio_scsi.o 00:02:52.433 CC module/vfu_device/vfu_virtio_rpc.o 00:02:52.433 CC module/vfu_device/vfu_virtio_fs.o 00:02:52.433 CC module/env_dpdk/env_dpdk_rpc.o 00:02:52.694 CC module/blob/bdev/blob_bdev.o 00:02:52.694 CC module/accel/error/accel_error.o 00:02:52.694 CC module/accel/error/accel_error_rpc.o 00:02:52.694 CC module/sock/posix/posix.o 00:02:52.694 LIB libspdk_env_dpdk_rpc.a 00:02:52.694 CC module/accel/dsa/accel_dsa.o 00:02:52.694 CC module/accel/iaa/accel_iaa.o 00:02:52.694 CC module/accel/dsa/accel_dsa_rpc.o 00:02:52.694 CC module/accel/iaa/accel_iaa_rpc.o 00:02:52.694 CC module/accel/ioat/accel_ioat.o 00:02:52.694 CC module/accel/ioat/accel_ioat_rpc.o 00:02:52.694 CC module/fsdev/aio/fsdev_aio.o 00:02:52.694 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:52.694 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:52.694 CC module/fsdev/aio/linux_aio_mgr.o 00:02:52.694 CC module/keyring/file/keyring.o 00:02:52.694 CC module/keyring/file/keyring_rpc.o 00:02:52.694 CC module/keyring/linux/keyring.o 00:02:52.694 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:52.694 CC 
module/keyring/linux/keyring_rpc.o 00:02:52.694 CC module/scheduler/gscheduler/gscheduler.o 00:02:52.694 SO libspdk_env_dpdk_rpc.so.6.0 00:02:52.694 SYMLINK libspdk_env_dpdk_rpc.so 00:02:52.956 LIB libspdk_keyring_linux.a 00:02:52.956 LIB libspdk_keyring_file.a 00:02:52.956 LIB libspdk_scheduler_dpdk_governor.a 00:02:52.956 LIB libspdk_accel_error.a 00:02:52.956 LIB libspdk_scheduler_gscheduler.a 00:02:52.956 LIB libspdk_accel_ioat.a 00:02:52.956 LIB libspdk_scheduler_dynamic.a 00:02:52.956 SO libspdk_keyring_linux.so.1.0 00:02:52.956 LIB libspdk_accel_iaa.a 00:02:52.956 SO libspdk_keyring_file.so.2.0 00:02:52.956 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:52.956 SO libspdk_accel_error.so.2.0 00:02:52.956 SO libspdk_scheduler_gscheduler.so.4.0 00:02:52.956 SO libspdk_accel_ioat.so.6.0 00:02:52.956 LIB libspdk_blob_bdev.a 00:02:52.956 SO libspdk_scheduler_dynamic.so.4.0 00:02:52.956 LIB libspdk_accel_dsa.a 00:02:52.956 SO libspdk_accel_iaa.so.3.0 00:02:52.956 SYMLINK libspdk_keyring_linux.so 00:02:52.956 SO libspdk_blob_bdev.so.11.0 00:02:52.956 SYMLINK libspdk_keyring_file.so 00:02:52.956 SO libspdk_accel_dsa.so.5.0 00:02:52.956 SYMLINK libspdk_accel_ioat.so 00:02:52.956 SYMLINK libspdk_scheduler_gscheduler.so 00:02:52.956 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:52.956 SYMLINK libspdk_accel_error.so 00:02:52.956 SYMLINK libspdk_scheduler_dynamic.so 00:02:52.956 SYMLINK libspdk_accel_iaa.so 00:02:52.956 LIB libspdk_vfu_device.a 00:02:52.956 SYMLINK libspdk_blob_bdev.so 00:02:53.218 SYMLINK libspdk_accel_dsa.so 00:02:53.218 SO libspdk_vfu_device.so.3.0 00:02:53.218 SYMLINK libspdk_vfu_device.so 00:02:53.218 LIB libspdk_fsdev_aio.a 00:02:53.478 SO libspdk_fsdev_aio.so.1.0 00:02:53.478 LIB libspdk_sock_posix.a 00:02:53.478 SO libspdk_sock_posix.so.6.0 00:02:53.478 SYMLINK libspdk_fsdev_aio.so 00:02:53.478 SYMLINK libspdk_sock_posix.so 00:02:53.738 CC module/blobfs/bdev/blobfs_bdev.o 00:02:53.738 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:53.738 CC module/bdev/malloc/bdev_malloc.o 00:02:53.738 CC module/bdev/gpt/gpt.o 00:02:53.739 CC module/bdev/error/vbdev_error_rpc.o 00:02:53.739 CC module/bdev/gpt/vbdev_gpt.o 00:02:53.739 CC module/bdev/error/vbdev_error.o 00:02:53.739 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:53.739 CC module/bdev/null/bdev_null.o 00:02:53.739 CC module/bdev/split/vbdev_split.o 00:02:53.739 CC module/bdev/null/bdev_null_rpc.o 00:02:53.739 CC module/bdev/delay/vbdev_delay.o 00:02:53.739 CC module/bdev/split/vbdev_split_rpc.o 00:02:53.739 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:53.739 CC module/bdev/lvol/vbdev_lvol.o 00:02:53.739 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:53.739 CC module/bdev/passthru/vbdev_passthru.o 00:02:53.739 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:53.739 CC module/bdev/nvme/bdev_nvme.o 00:02:53.739 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:53.739 CC module/bdev/raid/bdev_raid.o 00:02:53.739 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:53.739 CC module/bdev/nvme/nvme_rpc.o 00:02:53.739 CC module/bdev/nvme/bdev_mdns_client.o 00:02:53.739 CC module/bdev/raid/bdev_raid_rpc.o 00:02:53.739 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:53.739 CC module/bdev/iscsi/bdev_iscsi.o 00:02:53.739 CC module/bdev/nvme/vbdev_opal.o 00:02:53.739 CC module/bdev/raid/bdev_raid_sb.o 00:02:53.739 CC module/bdev/ftl/bdev_ftl.o 00:02:53.739 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:53.739 CC module/bdev/raid/raid0.o 00:02:53.739 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:53.739 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:53.739 
CC module/bdev/raid/raid1.o 00:02:53.739 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:53.739 CC module/bdev/aio/bdev_aio.o 00:02:53.739 CC module/bdev/raid/concat.o 00:02:53.739 CC module/bdev/aio/bdev_aio_rpc.o 00:02:53.739 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:53.739 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:53.739 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:54.002 LIB libspdk_blobfs_bdev.a 00:02:54.002 LIB libspdk_bdev_split.a 00:02:54.002 SO libspdk_blobfs_bdev.so.6.0 00:02:54.002 LIB libspdk_bdev_null.a 00:02:54.002 LIB libspdk_bdev_error.a 00:02:54.002 LIB libspdk_bdev_gpt.a 00:02:54.002 SO libspdk_bdev_split.so.6.0 00:02:54.002 SO libspdk_bdev_null.so.6.0 00:02:54.002 SO libspdk_bdev_error.so.6.0 00:02:54.002 SYMLINK libspdk_blobfs_bdev.so 00:02:54.002 SO libspdk_bdev_gpt.so.6.0 00:02:54.002 LIB libspdk_bdev_ftl.a 00:02:54.002 LIB libspdk_bdev_passthru.a 00:02:54.002 SO libspdk_bdev_ftl.so.6.0 00:02:54.002 SYMLINK libspdk_bdev_split.so 00:02:54.002 LIB libspdk_bdev_malloc.a 00:02:54.002 SO libspdk_bdev_passthru.so.6.0 00:02:54.002 LIB libspdk_bdev_delay.a 00:02:54.002 LIB libspdk_bdev_zone_block.a 00:02:54.002 SYMLINK libspdk_bdev_null.so 00:02:54.002 SYMLINK libspdk_bdev_error.so 00:02:54.263 SYMLINK libspdk_bdev_gpt.so 00:02:54.263 LIB libspdk_bdev_aio.a 00:02:54.263 SO libspdk_bdev_malloc.so.6.0 00:02:54.263 LIB libspdk_bdev_iscsi.a 00:02:54.263 SYMLINK libspdk_bdev_ftl.so 00:02:54.263 SO libspdk_bdev_zone_block.so.6.0 00:02:54.263 SO libspdk_bdev_delay.so.6.0 00:02:54.263 SO libspdk_bdev_aio.so.6.0 00:02:54.263 SYMLINK libspdk_bdev_passthru.so 00:02:54.263 SO libspdk_bdev_iscsi.so.6.0 00:02:54.263 SYMLINK libspdk_bdev_malloc.so 00:02:54.263 SYMLINK libspdk_bdev_zone_block.so 00:02:54.263 SYMLINK libspdk_bdev_delay.so 00:02:54.263 SYMLINK libspdk_bdev_aio.so 00:02:54.263 SYMLINK libspdk_bdev_iscsi.so 00:02:54.263 LIB libspdk_bdev_lvol.a 00:02:54.263 LIB libspdk_bdev_virtio.a 00:02:54.263 SO libspdk_bdev_lvol.so.6.0 00:02:54.263 SO libspdk_bdev_virtio.so.6.0 00:02:54.263 SYMLINK libspdk_bdev_lvol.so 00:02:54.525 SYMLINK libspdk_bdev_virtio.so 00:02:54.787 LIB libspdk_bdev_raid.a 00:02:54.787 SO libspdk_bdev_raid.so.6.0 00:02:54.787 SYMLINK libspdk_bdev_raid.so 00:02:56.171 LIB libspdk_bdev_nvme.a 00:02:56.171 SO libspdk_bdev_nvme.so.7.1 00:02:56.171 SYMLINK libspdk_bdev_nvme.so 00:02:57.116 CC module/event/subsystems/vmd/vmd.o 00:02:57.116 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:57.116 CC module/event/subsystems/keyring/keyring.o 00:02:57.116 CC module/event/subsystems/sock/sock.o 00:02:57.116 CC module/event/subsystems/iobuf/iobuf.o 00:02:57.116 CC module/event/subsystems/scheduler/scheduler.o 00:02:57.116 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:57.116 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:57.116 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:57.116 CC module/event/subsystems/fsdev/fsdev.o 00:02:57.116 LIB libspdk_event_scheduler.a 00:02:57.116 LIB libspdk_event_sock.a 00:02:57.116 LIB libspdk_event_vhost_blk.a 00:02:57.116 LIB libspdk_event_keyring.a 00:02:57.116 LIB libspdk_event_fsdev.a 00:02:57.116 LIB libspdk_event_iobuf.a 00:02:57.116 LIB libspdk_event_vmd.a 00:02:57.116 LIB libspdk_event_vfu_tgt.a 00:02:57.116 SO libspdk_event_vhost_blk.so.3.0 00:02:57.116 SO libspdk_event_sock.so.5.0 00:02:57.116 SO libspdk_event_scheduler.so.4.0 00:02:57.116 SO libspdk_event_keyring.so.1.0 00:02:57.116 SO libspdk_event_vfu_tgt.so.3.0 00:02:57.116 SO libspdk_event_fsdev.so.1.0 00:02:57.116 SO libspdk_event_iobuf.so.3.0 00:02:57.116 
SO libspdk_event_vmd.so.6.0 00:02:57.116 SYMLINK libspdk_event_vhost_blk.so 00:02:57.116 SYMLINK libspdk_event_fsdev.so 00:02:57.116 SYMLINK libspdk_event_vfu_tgt.so 00:02:57.116 SYMLINK libspdk_event_sock.so 00:02:57.116 SYMLINK libspdk_event_keyring.so 00:02:57.116 SYMLINK libspdk_event_scheduler.so 00:02:57.116 SYMLINK libspdk_event_iobuf.so 00:02:57.116 SYMLINK libspdk_event_vmd.so 00:02:57.691 CC module/event/subsystems/accel/accel.o 00:02:57.691 LIB libspdk_event_accel.a 00:02:57.691 SO libspdk_event_accel.so.6.0 00:02:57.952 SYMLINK libspdk_event_accel.so 00:02:58.212 CC module/event/subsystems/bdev/bdev.o 00:02:58.473 LIB libspdk_event_bdev.a 00:02:58.473 SO libspdk_event_bdev.so.6.0 00:02:58.473 SYMLINK libspdk_event_bdev.so 00:02:58.734 CC module/event/subsystems/nbd/nbd.o 00:02:58.734 CC module/event/subsystems/scsi/scsi.o 00:02:58.995 CC module/event/subsystems/ublk/ublk.o 00:02:58.995 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:58.995 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:58.995 LIB libspdk_event_scsi.a 00:02:58.995 LIB libspdk_event_nbd.a 00:02:58.995 LIB libspdk_event_ublk.a 00:02:58.995 SO libspdk_event_nbd.so.6.0 00:02:58.995 SO libspdk_event_scsi.so.6.0 00:02:58.995 SO libspdk_event_ublk.so.3.0 00:02:59.256 LIB libspdk_event_nvmf.a 00:02:59.256 SYMLINK libspdk_event_scsi.so 00:02:59.256 SYMLINK libspdk_event_nbd.so 00:02:59.256 SYMLINK libspdk_event_ublk.so 00:02:59.256 SO libspdk_event_nvmf.so.6.0 00:02:59.256 SYMLINK libspdk_event_nvmf.so 00:02:59.516 CC module/event/subsystems/iscsi/iscsi.o 00:02:59.516 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:59.778 LIB libspdk_event_vhost_scsi.a 00:02:59.778 LIB libspdk_event_iscsi.a 00:02:59.778 SO libspdk_event_vhost_scsi.so.3.0 00:02:59.778 SO libspdk_event_iscsi.so.6.0 00:02:59.778 SYMLINK libspdk_event_vhost_scsi.so 00:02:59.778 SYMLINK libspdk_event_iscsi.so 00:03:00.040 SO libspdk.so.6.0 00:03:00.041 SYMLINK libspdk.so 00:03:00.302 CXX app/trace/trace.o 00:03:00.302 CC app/trace_record/trace_record.o 00:03:00.302 CC app/spdk_nvme_identify/identify.o 00:03:00.302 CC app/spdk_lspci/spdk_lspci.o 00:03:00.302 CC app/spdk_top/spdk_top.o 00:03:00.302 TEST_HEADER include/spdk/accel.h 00:03:00.302 CC app/spdk_nvme_perf/perf.o 00:03:00.302 TEST_HEADER include/spdk/accel_module.h 00:03:00.302 TEST_HEADER include/spdk/assert.h 00:03:00.302 TEST_HEADER include/spdk/barrier.h 00:03:00.302 TEST_HEADER include/spdk/base64.h 00:03:00.302 CC app/spdk_nvme_discover/discovery_aer.o 00:03:00.302 CC test/rpc_client/rpc_client_test.o 00:03:00.302 TEST_HEADER include/spdk/bdev.h 00:03:00.302 TEST_HEADER include/spdk/bdev_module.h 00:03:00.302 TEST_HEADER include/spdk/bdev_zone.h 00:03:00.302 TEST_HEADER include/spdk/bit_pool.h 00:03:00.302 TEST_HEADER include/spdk/bit_array.h 00:03:00.302 TEST_HEADER include/spdk/blob_bdev.h 00:03:00.302 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:00.302 TEST_HEADER include/spdk/blobfs.h 00:03:00.302 TEST_HEADER include/spdk/conf.h 00:03:00.302 TEST_HEADER include/spdk/blob.h 00:03:00.302 TEST_HEADER include/spdk/config.h 00:03:00.302 TEST_HEADER include/spdk/cpuset.h 00:03:00.302 TEST_HEADER include/spdk/crc16.h 00:03:00.302 TEST_HEADER include/spdk/crc64.h 00:03:00.302 TEST_HEADER include/spdk/crc32.h 00:03:00.564 TEST_HEADER include/spdk/dif.h 00:03:00.564 TEST_HEADER include/spdk/endian.h 00:03:00.564 TEST_HEADER include/spdk/dma.h 00:03:00.564 TEST_HEADER include/spdk/env_dpdk.h 00:03:00.564 TEST_HEADER include/spdk/env.h 00:03:00.564 CC examples/interrupt_tgt/interrupt_tgt.o 
00:03:00.564 TEST_HEADER include/spdk/event.h 00:03:00.564 TEST_HEADER include/spdk/fd_group.h 00:03:00.564 TEST_HEADER include/spdk/file.h 00:03:00.564 TEST_HEADER include/spdk/fd.h 00:03:00.564 TEST_HEADER include/spdk/fsdev.h 00:03:00.564 TEST_HEADER include/spdk/fsdev_module.h 00:03:00.564 CC app/spdk_dd/spdk_dd.o 00:03:00.564 TEST_HEADER include/spdk/ftl.h 00:03:00.564 TEST_HEADER include/spdk/gpt_spec.h 00:03:00.564 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:00.564 CC app/nvmf_tgt/nvmf_main.o 00:03:00.564 TEST_HEADER include/spdk/hexlify.h 00:03:00.564 CC app/iscsi_tgt/iscsi_tgt.o 00:03:00.564 TEST_HEADER include/spdk/histogram_data.h 00:03:00.564 TEST_HEADER include/spdk/idxd.h 00:03:00.564 TEST_HEADER include/spdk/idxd_spec.h 00:03:00.564 TEST_HEADER include/spdk/init.h 00:03:00.564 TEST_HEADER include/spdk/ioat.h 00:03:00.564 TEST_HEADER include/spdk/ioat_spec.h 00:03:00.564 TEST_HEADER include/spdk/iscsi_spec.h 00:03:00.564 TEST_HEADER include/spdk/json.h 00:03:00.564 TEST_HEADER include/spdk/jsonrpc.h 00:03:00.564 TEST_HEADER include/spdk/keyring_module.h 00:03:00.564 TEST_HEADER include/spdk/keyring.h 00:03:00.564 TEST_HEADER include/spdk/likely.h 00:03:00.564 TEST_HEADER include/spdk/log.h 00:03:00.564 TEST_HEADER include/spdk/lvol.h 00:03:00.564 TEST_HEADER include/spdk/md5.h 00:03:00.564 TEST_HEADER include/spdk/mmio.h 00:03:00.564 TEST_HEADER include/spdk/memory.h 00:03:00.564 CC app/spdk_tgt/spdk_tgt.o 00:03:00.564 TEST_HEADER include/spdk/nbd.h 00:03:00.564 TEST_HEADER include/spdk/net.h 00:03:00.564 TEST_HEADER include/spdk/notify.h 00:03:00.564 TEST_HEADER include/spdk/nvme.h 00:03:00.564 TEST_HEADER include/spdk/nvme_intel.h 00:03:00.564 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:00.564 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:00.564 TEST_HEADER include/spdk/nvme_spec.h 00:03:00.564 TEST_HEADER include/spdk/nvme_zns.h 00:03:00.564 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:00.564 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:00.564 TEST_HEADER include/spdk/nvmf.h 00:03:00.564 TEST_HEADER include/spdk/nvmf_spec.h 00:03:00.564 TEST_HEADER include/spdk/nvmf_transport.h 00:03:00.564 TEST_HEADER include/spdk/opal.h 00:03:00.564 TEST_HEADER include/spdk/opal_spec.h 00:03:00.564 TEST_HEADER include/spdk/pci_ids.h 00:03:00.564 TEST_HEADER include/spdk/pipe.h 00:03:00.564 TEST_HEADER include/spdk/queue.h 00:03:00.564 TEST_HEADER include/spdk/reduce.h 00:03:00.564 TEST_HEADER include/spdk/rpc.h 00:03:00.564 TEST_HEADER include/spdk/scheduler.h 00:03:00.564 TEST_HEADER include/spdk/scsi.h 00:03:00.564 TEST_HEADER include/spdk/scsi_spec.h 00:03:00.564 TEST_HEADER include/spdk/sock.h 00:03:00.564 TEST_HEADER include/spdk/stdinc.h 00:03:00.564 TEST_HEADER include/spdk/string.h 00:03:00.564 TEST_HEADER include/spdk/thread.h 00:03:00.564 TEST_HEADER include/spdk/trace.h 00:03:00.564 TEST_HEADER include/spdk/trace_parser.h 00:03:00.564 TEST_HEADER include/spdk/tree.h 00:03:00.564 TEST_HEADER include/spdk/ublk.h 00:03:00.564 TEST_HEADER include/spdk/util.h 00:03:00.564 TEST_HEADER include/spdk/uuid.h 00:03:00.564 TEST_HEADER include/spdk/version.h 00:03:00.564 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:00.564 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:00.564 TEST_HEADER include/spdk/vmd.h 00:03:00.564 TEST_HEADER include/spdk/vhost.h 00:03:00.564 TEST_HEADER include/spdk/xor.h 00:03:00.564 TEST_HEADER include/spdk/zipf.h 00:03:00.564 CXX test/cpp_headers/accel.o 00:03:00.564 CXX test/cpp_headers/accel_module.o 00:03:00.564 CXX test/cpp_headers/assert.o 
00:03:00.564 CXX test/cpp_headers/barrier.o 00:03:00.564 CXX test/cpp_headers/base64.o 00:03:00.564 CXX test/cpp_headers/bdev_module.o 00:03:00.564 CXX test/cpp_headers/bdev.o 00:03:00.564 CXX test/cpp_headers/bdev_zone.o 00:03:00.564 CXX test/cpp_headers/bit_array.o 00:03:00.564 CXX test/cpp_headers/bit_pool.o 00:03:00.564 CXX test/cpp_headers/blob_bdev.o 00:03:00.564 CXX test/cpp_headers/blobfs_bdev.o 00:03:00.564 CXX test/cpp_headers/blob.o 00:03:00.564 CXX test/cpp_headers/blobfs.o 00:03:00.564 CXX test/cpp_headers/conf.o 00:03:00.565 CXX test/cpp_headers/config.o 00:03:00.565 CXX test/cpp_headers/cpuset.o 00:03:00.565 CXX test/cpp_headers/crc16.o 00:03:00.565 CXX test/cpp_headers/crc32.o 00:03:00.565 CXX test/cpp_headers/crc64.o 00:03:00.565 CXX test/cpp_headers/endian.o 00:03:00.565 CXX test/cpp_headers/dif.o 00:03:00.565 CXX test/cpp_headers/dma.o 00:03:00.565 CXX test/cpp_headers/env_dpdk.o 00:03:00.565 CXX test/cpp_headers/fd_group.o 00:03:00.565 CXX test/cpp_headers/env.o 00:03:00.565 CXX test/cpp_headers/event.o 00:03:00.565 CXX test/cpp_headers/fd.o 00:03:00.565 CXX test/cpp_headers/file.o 00:03:00.565 CC examples/util/zipf/zipf.o 00:03:00.565 CXX test/cpp_headers/fsdev.o 00:03:00.565 CXX test/cpp_headers/fsdev_module.o 00:03:00.565 CXX test/cpp_headers/ftl.o 00:03:00.565 CXX test/cpp_headers/hexlify.o 00:03:00.565 CXX test/cpp_headers/fuse_dispatcher.o 00:03:00.565 CXX test/cpp_headers/gpt_spec.o 00:03:00.565 CXX test/cpp_headers/histogram_data.o 00:03:00.565 CXX test/cpp_headers/idxd.o 00:03:00.565 CXX test/cpp_headers/idxd_spec.o 00:03:00.565 CXX test/cpp_headers/ioat.o 00:03:00.565 CC examples/ioat/perf/perf.o 00:03:00.565 CXX test/cpp_headers/init.o 00:03:00.565 CXX test/cpp_headers/json.o 00:03:00.565 CXX test/cpp_headers/ioat_spec.o 00:03:00.565 CXX test/cpp_headers/keyring.o 00:03:00.565 CXX test/cpp_headers/jsonrpc.o 00:03:00.565 CXX test/cpp_headers/iscsi_spec.o 00:03:00.565 CXX test/cpp_headers/likely.o 00:03:00.565 CXX test/cpp_headers/log.o 00:03:00.565 CXX test/cpp_headers/keyring_module.o 00:03:00.565 CXX test/cpp_headers/lvol.o 00:03:00.565 CXX test/cpp_headers/mmio.o 00:03:00.565 CC examples/ioat/verify/verify.o 00:03:00.565 CXX test/cpp_headers/memory.o 00:03:00.565 CXX test/cpp_headers/net.o 00:03:00.565 CXX test/cpp_headers/md5.o 00:03:00.565 CXX test/cpp_headers/notify.o 00:03:00.565 CXX test/cpp_headers/nvme_intel.o 00:03:00.565 CXX test/cpp_headers/nbd.o 00:03:00.565 CC test/thread/poller_perf/poller_perf.o 00:03:00.565 CXX test/cpp_headers/nvme.o 00:03:00.565 CXX test/cpp_headers/nvme_ocssd.o 00:03:00.565 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:00.565 CXX test/cpp_headers/nvmf_cmd.o 00:03:00.565 LINK spdk_lspci 00:03:00.565 CXX test/cpp_headers/nvme_spec.o 00:03:00.565 CXX test/cpp_headers/nvme_zns.o 00:03:00.565 CC test/app/jsoncat/jsoncat.o 00:03:00.565 CXX test/cpp_headers/nvmf.o 00:03:00.565 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:00.565 CXX test/cpp_headers/nvmf_spec.o 00:03:00.565 CXX test/cpp_headers/nvmf_transport.o 00:03:00.565 CXX test/cpp_headers/opal_spec.o 00:03:00.565 CC test/app/histogram_perf/histogram_perf.o 00:03:00.565 CXX test/cpp_headers/opal.o 00:03:00.565 CXX test/cpp_headers/pci_ids.o 00:03:00.565 CC test/dma/test_dma/test_dma.o 00:03:00.565 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:00.565 CC test/env/vtophys/vtophys.o 00:03:00.565 CXX test/cpp_headers/pipe.o 00:03:00.565 CXX test/cpp_headers/queue.o 00:03:00.565 CXX test/cpp_headers/reduce.o 00:03:00.565 CXX test/cpp_headers/rpc.o 00:03:00.833 CXX 
test/cpp_headers/scheduler.o 00:03:00.833 CXX test/cpp_headers/scsi_spec.o 00:03:00.833 CXX test/cpp_headers/scsi.o 00:03:00.833 CXX test/cpp_headers/sock.o 00:03:00.833 CC test/env/memory/memory_ut.o 00:03:00.833 CXX test/cpp_headers/stdinc.o 00:03:00.833 CXX test/cpp_headers/string.o 00:03:00.833 CXX test/cpp_headers/trace.o 00:03:00.833 CXX test/cpp_headers/thread.o 00:03:00.833 CXX test/cpp_headers/trace_parser.o 00:03:00.833 CXX test/cpp_headers/tree.o 00:03:00.833 CXX test/cpp_headers/ublk.o 00:03:00.833 CC test/app/stub/stub.o 00:03:00.833 CXX test/cpp_headers/util.o 00:03:00.833 CC test/app/bdev_svc/bdev_svc.o 00:03:00.833 CXX test/cpp_headers/uuid.o 00:03:00.833 CXX test/cpp_headers/vfio_user_spec.o 00:03:00.833 CXX test/cpp_headers/version.o 00:03:00.833 CXX test/cpp_headers/vfio_user_pci.o 00:03:00.833 CC test/env/pci/pci_ut.o 00:03:00.833 CXX test/cpp_headers/vmd.o 00:03:00.833 CXX test/cpp_headers/vhost.o 00:03:00.833 CXX test/cpp_headers/xor.o 00:03:00.833 CXX test/cpp_headers/zipf.o 00:03:00.833 CC app/fio/nvme/fio_plugin.o 00:03:00.833 LINK rpc_client_test 00:03:00.833 CC app/fio/bdev/fio_plugin.o 00:03:00.833 LINK spdk_nvme_discover 00:03:00.833 LINK interrupt_tgt 00:03:01.102 LINK nvmf_tgt 00:03:01.102 LINK iscsi_tgt 00:03:01.102 LINK spdk_tgt 00:03:01.102 LINK spdk_trace_record 00:03:01.363 LINK histogram_perf 00:03:01.363 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:01.363 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:01.363 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:01.363 CC test/env/mem_callbacks/mem_callbacks.o 00:03:01.363 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:01.363 LINK spdk_dd 00:03:01.363 LINK spdk_trace 00:03:01.622 LINK zipf 00:03:01.622 LINK verify 00:03:01.622 LINK poller_perf 00:03:01.622 LINK jsoncat 00:03:01.622 LINK bdev_svc 00:03:01.622 LINK vtophys 00:03:01.622 LINK ioat_perf 00:03:01.622 LINK env_dpdk_post_init 00:03:01.882 LINK stub 00:03:01.882 LINK spdk_nvme_perf 00:03:01.882 CC app/vhost/vhost.o 00:03:01.882 LINK pci_ut 00:03:02.143 LINK nvme_fuzz 00:03:02.143 LINK vhost_fuzz 00:03:02.143 CC examples/vmd/led/led.o 00:03:02.143 CC examples/idxd/perf/perf.o 00:03:02.143 CC examples/vmd/lsvmd/lsvmd.o 00:03:02.143 CC examples/sock/hello_world/hello_sock.o 00:03:02.143 LINK spdk_nvme 00:03:02.143 CC examples/thread/thread/thread_ex.o 00:03:02.143 LINK test_dma 00:03:02.143 CC test/event/reactor_perf/reactor_perf.o 00:03:02.143 LINK spdk_bdev 00:03:02.143 CC test/event/reactor/reactor.o 00:03:02.143 LINK spdk_nvme_identify 00:03:02.143 CC test/event/event_perf/event_perf.o 00:03:02.143 LINK vhost 00:03:02.143 CC test/event/app_repeat/app_repeat.o 00:03:02.143 LINK mem_callbacks 00:03:02.143 CC test/event/scheduler/scheduler.o 00:03:02.404 LINK spdk_top 00:03:02.404 LINK led 00:03:02.404 LINK lsvmd 00:03:02.404 LINK reactor_perf 00:03:02.404 LINK reactor 00:03:02.404 LINK event_perf 00:03:02.404 LINK app_repeat 00:03:02.404 LINK hello_sock 00:03:02.404 LINK thread 00:03:02.404 LINK idxd_perf 00:03:02.665 LINK scheduler 00:03:02.665 LINK memory_ut 00:03:02.665 CC test/nvme/overhead/overhead.o 00:03:02.926 CC test/nvme/reset/reset.o 00:03:02.926 CC test/nvme/fdp/fdp.o 00:03:02.926 CC test/nvme/boot_partition/boot_partition.o 00:03:02.926 CC test/nvme/simple_copy/simple_copy.o 00:03:02.926 CC test/nvme/aer/aer.o 00:03:02.926 CC test/nvme/startup/startup.o 00:03:02.926 CC test/nvme/reserve/reserve.o 00:03:02.926 CC test/nvme/sgl/sgl.o 00:03:02.926 CC test/nvme/cuse/cuse.o 00:03:02.926 CC test/nvme/e2edp/nvme_dp.o 00:03:02.926 CC 
test/nvme/compliance/nvme_compliance.o 00:03:02.926 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:02.926 CC test/nvme/err_injection/err_injection.o 00:03:02.926 CC test/nvme/fused_ordering/fused_ordering.o 00:03:02.926 CC test/nvme/connect_stress/connect_stress.o 00:03:02.926 CC test/accel/dif/dif.o 00:03:02.926 CC test/blobfs/mkfs/mkfs.o 00:03:02.926 CC test/lvol/esnap/esnap.o 00:03:02.926 CC examples/nvme/reconnect/reconnect.o 00:03:02.926 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:02.926 CC examples/nvme/hello_world/hello_world.o 00:03:02.926 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:02.926 LINK boot_partition 00:03:02.926 CC examples/nvme/arbitration/arbitration.o 00:03:03.186 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:03.186 CC examples/nvme/abort/abort.o 00:03:03.186 CC examples/nvme/hotplug/hotplug.o 00:03:03.186 LINK startup 00:03:03.186 LINK connect_stress 00:03:03.186 LINK simple_copy 00:03:03.186 LINK doorbell_aers 00:03:03.186 LINK reserve 00:03:03.186 LINK err_injection 00:03:03.186 CC examples/accel/perf/accel_perf.o 00:03:03.186 LINK fused_ordering 00:03:03.186 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:03.186 LINK mkfs 00:03:03.186 LINK overhead 00:03:03.186 CC examples/blob/cli/blobcli.o 00:03:03.186 CC examples/blob/hello_world/hello_blob.o 00:03:03.186 LINK iscsi_fuzz 00:03:03.186 LINK reset 00:03:03.186 LINK sgl 00:03:03.186 LINK aer 00:03:03.186 LINK nvme_dp 00:03:03.186 LINK nvme_compliance 00:03:03.186 LINK fdp 00:03:03.186 LINK cmb_copy 00:03:03.186 LINK pmr_persistence 00:03:03.447 LINK hello_world 00:03:03.447 LINK hotplug 00:03:03.447 LINK reconnect 00:03:03.447 LINK hello_fsdev 00:03:03.447 LINK hello_blob 00:03:03.447 LINK arbitration 00:03:03.447 LINK abort 00:03:03.447 LINK nvme_manage 00:03:03.447 LINK dif 00:03:03.709 LINK accel_perf 00:03:03.709 LINK blobcli 00:03:03.969 LINK cuse 00:03:04.230 CC test/bdev/bdevio/bdevio.o 00:03:04.230 CC examples/bdev/hello_world/hello_bdev.o 00:03:04.230 CC examples/bdev/bdevperf/bdevperf.o 00:03:04.491 LINK hello_bdev 00:03:04.491 LINK bdevio 00:03:05.064 LINK bdevperf 00:03:05.635 CC examples/nvmf/nvmf/nvmf.o 00:03:05.896 LINK nvmf 00:03:07.809 LINK esnap 00:03:07.809 00:03:07.809 real 0m55.892s 00:03:07.809 user 8m6.081s 00:03:07.809 sys 5m34.103s 00:03:07.809 16:45:59 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:07.809 16:45:59 make -- common/autotest_common.sh@10 -- $ set +x 00:03:07.809 ************************************ 00:03:07.809 END TEST make 00:03:07.809 ************************************ 00:03:07.809 16:45:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:07.809 16:45:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:07.809 16:45:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:07.809 16:45:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.809 16:45:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:07.809 16:45:59 -- pm/common@44 -- $ pid=1634335 00:03:07.809 16:45:59 -- pm/common@50 -- $ kill -TERM 1634335 00:03:07.809 16:45:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.809 16:45:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:07.809 16:45:59 -- pm/common@44 -- $ pid=1634336 00:03:07.809 16:45:59 -- pm/common@50 -- $ kill -TERM 1634336 00:03:07.809 16:45:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:03:07.809 16:45:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:07.809 16:45:59 -- pm/common@44 -- $ pid=1634338 00:03:07.809 16:45:59 -- pm/common@50 -- $ kill -TERM 1634338 00:03:07.809 16:45:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.809 16:45:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:07.809 16:45:59 -- pm/common@44 -- $ pid=1634361 00:03:07.809 16:45:59 -- pm/common@50 -- $ sudo -E kill -TERM 1634361 00:03:08.070 16:46:00 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:08.070 16:46:00 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:08.070 16:46:00 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:08.070 16:46:00 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:08.070 16:46:00 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:08.070 16:46:00 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:08.070 16:46:00 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:08.070 16:46:00 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:08.071 16:46:00 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:08.071 16:46:00 -- scripts/common.sh@336 -- # IFS=.-: 00:03:08.071 16:46:00 -- scripts/common.sh@336 -- # read -ra ver1 00:03:08.071 16:46:00 -- scripts/common.sh@337 -- # IFS=.-: 00:03:08.071 16:46:00 -- scripts/common.sh@337 -- # read -ra ver2 00:03:08.071 16:46:00 -- scripts/common.sh@338 -- # local 'op=<' 00:03:08.071 16:46:00 -- scripts/common.sh@340 -- # ver1_l=2 00:03:08.071 16:46:00 -- scripts/common.sh@341 -- # ver2_l=1 00:03:08.071 16:46:00 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:08.071 16:46:00 -- scripts/common.sh@344 -- # case "$op" in 00:03:08.071 16:46:00 -- scripts/common.sh@345 -- # : 1 00:03:08.071 16:46:00 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:08.071 16:46:00 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:08.071 16:46:00 -- scripts/common.sh@365 -- # decimal 1 00:03:08.071 16:46:00 -- scripts/common.sh@353 -- # local d=1 00:03:08.071 16:46:00 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:08.071 16:46:00 -- scripts/common.sh@355 -- # echo 1 00:03:08.071 16:46:00 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:08.071 16:46:00 -- scripts/common.sh@366 -- # decimal 2 00:03:08.071 16:46:00 -- scripts/common.sh@353 -- # local d=2 00:03:08.071 16:46:00 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:08.071 16:46:00 -- scripts/common.sh@355 -- # echo 2 00:03:08.071 16:46:00 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:08.071 16:46:00 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:08.071 16:46:00 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:08.071 16:46:00 -- scripts/common.sh@368 -- # return 0 00:03:08.071 16:46:00 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:08.071 16:46:00 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:08.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.071 --rc genhtml_branch_coverage=1 00:03:08.071 --rc genhtml_function_coverage=1 00:03:08.071 --rc genhtml_legend=1 00:03:08.071 --rc geninfo_all_blocks=1 00:03:08.071 --rc geninfo_unexecuted_blocks=1 00:03:08.071 00:03:08.071 ' 00:03:08.071 16:46:00 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:08.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.071 --rc genhtml_branch_coverage=1 00:03:08.071 --rc genhtml_function_coverage=1 00:03:08.071 --rc genhtml_legend=1 00:03:08.071 --rc geninfo_all_blocks=1 00:03:08.071 --rc geninfo_unexecuted_blocks=1 00:03:08.071 00:03:08.071 ' 00:03:08.071 16:46:00 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:08.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.071 --rc genhtml_branch_coverage=1 00:03:08.071 --rc genhtml_function_coverage=1 00:03:08.071 --rc genhtml_legend=1 00:03:08.071 --rc geninfo_all_blocks=1 00:03:08.071 --rc geninfo_unexecuted_blocks=1 00:03:08.071 00:03:08.071 ' 00:03:08.071 16:46:00 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:08.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.071 --rc genhtml_branch_coverage=1 00:03:08.071 --rc genhtml_function_coverage=1 00:03:08.071 --rc genhtml_legend=1 00:03:08.071 --rc geninfo_all_blocks=1 00:03:08.071 --rc geninfo_unexecuted_blocks=1 00:03:08.071 00:03:08.071 ' 00:03:08.071 16:46:00 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:08.071 16:46:00 -- nvmf/common.sh@7 -- # uname -s 00:03:08.071 16:46:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:08.071 16:46:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:08.071 16:46:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:08.071 16:46:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:08.071 16:46:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:08.071 16:46:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:08.071 16:46:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:08.071 16:46:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:08.071 16:46:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:08.071 16:46:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:08.071 16:46:00 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:08.071 16:46:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:08.071 16:46:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:08.071 16:46:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:08.071 16:46:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:08.071 16:46:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:08.071 16:46:00 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:08.071 16:46:00 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:08.071 16:46:00 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:08.071 16:46:00 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:08.071 16:46:00 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:08.071 16:46:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.071 16:46:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.071 16:46:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.071 16:46:00 -- paths/export.sh@5 -- # export PATH 00:03:08.071 16:46:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.071 16:46:00 -- nvmf/common.sh@51 -- # : 0 00:03:08.071 16:46:00 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:08.071 16:46:00 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:08.071 16:46:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:08.071 16:46:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:08.071 16:46:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:08.071 16:46:00 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:08.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:08.071 16:46:00 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:08.071 16:46:00 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:08.071 16:46:00 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:08.071 16:46:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:08.071 16:46:00 -- spdk/autotest.sh@32 -- # uname -s 00:03:08.332 16:46:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:08.332 16:46:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:08.332 16:46:00 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
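One genuine failure is captured in the nvmf/common.sh trace above: line 33 evaluated '[' '' -eq 1 ']' and test printed "[: : integer expression expected", because -eq requires an integer on both sides and the variable being tested expanded to the empty string. The usual guard is to give the expansion a numeric default so the comparison always sees an integer; a minimal sketch (the flag name is illustrative, the trace does not say which variable was empty):

    flag=""
    # [ "$flag" -eq 1 ]               # reproduces: [: : integer expression expected
    if [ "${flag:-0}" -eq 1 ]; then   # empty/unset expands to 0 instead
        echo "feature enabled"
    fi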
00:03:08.332 16:46:00 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:08.332 16:46:00 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:08.332 16:46:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:08.332 16:46:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:08.332 16:46:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:08.332 16:46:00 -- spdk/autotest.sh@48 -- # udevadm_pid=1700446 00:03:08.332 16:46:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:08.332 16:46:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:08.332 16:46:00 -- pm/common@17 -- # local monitor 00:03:08.332 16:46:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.332 16:46:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.332 16:46:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.332 16:46:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.332 16:46:00 -- pm/common@21 -- # date +%s 00:03:08.332 16:46:00 -- pm/common@21 -- # date +%s 00:03:08.332 16:46:00 -- pm/common@25 -- # sleep 1 00:03:08.332 16:46:00 -- pm/common@21 -- # date +%s 00:03:08.332 16:46:00 -- pm/common@21 -- # date +%s 00:03:08.332 16:46:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732117560 00:03:08.332 16:46:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732117560 00:03:08.332 16:46:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732117560 00:03:08.332 16:46:00 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732117560 00:03:08.332 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732117560_collect-cpu-load.pm.log 00:03:08.332 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732117560_collect-vmstat.pm.log 00:03:08.332 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732117560_collect-cpu-temp.pm.log 00:03:08.332 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732117560_collect-bmc-pm.bmc.pm.log 00:03:09.270 16:46:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:09.270 16:46:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:09.270 16:46:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:09.270 16:46:01 -- common/autotest_common.sh@10 -- # set +x 00:03:09.270 16:46:01 -- spdk/autotest.sh@59 -- # create_test_list 00:03:09.270 16:46:01 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:09.270 16:46:01 -- common/autotest_common.sh@10 -- # set +x 00:03:09.270 16:46:01 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:09.270 16:46:01 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:09.270 16:46:01 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:09.270 16:46:01 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:09.270 16:46:01 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:09.270 16:46:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:09.270 16:46:01 -- common/autotest_common.sh@1457 -- # uname 00:03:09.270 16:46:01 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:09.270 16:46:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:09.270 16:46:01 -- common/autotest_common.sh@1477 -- # uname 00:03:09.270 16:46:01 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:09.270 16:46:01 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:09.270 16:46:01 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:09.270 lcov: LCOV version 1.15 00:03:09.270 16:46:01 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:35.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:35.846 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:40.055 16:46:31 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:40.055 16:46:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:40.055 16:46:31 -- common/autotest_common.sh@10 -- # set +x 00:03:40.055 16:46:31 -- spdk/autotest.sh@78 -- # rm -f 00:03:40.055 16:46:31 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.366 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:43.366 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:43.366 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:43.366 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:43.366 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:43.366 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:43.366 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:43.366 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:43.366 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:43.366 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:43.627 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:43.627 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:43.627 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:43.627 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:43.627 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:43.627 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:43.627 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:43.890 16:46:36 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:43.890 16:46:36 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:43.890 16:46:36 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:43.890 16:46:36 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:43.890 16:46:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:43.890 16:46:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:43.890 16:46:36 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:43.890 16:46:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:43.890 16:46:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:43.890 16:46:36 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:43.890 16:46:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:43.890 16:46:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:43.890 16:46:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:43.890 16:46:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:43.890 16:46:36 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:43.890 No valid GPT data, bailing 00:03:43.890 16:46:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:44.152 16:46:36 -- scripts/common.sh@394 -- # pt= 00:03:44.152 16:46:36 -- scripts/common.sh@395 -- # return 1 00:03:44.152 16:46:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:44.152 1+0 records in 00:03:44.152 1+0 records out 00:03:44.152 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00459498 s, 228 MB/s 00:03:44.152 16:46:36 -- spdk/autotest.sh@105 -- # sync 00:03:44.152 16:46:36 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:44.152 16:46:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:44.152 16:46:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:54.158 16:46:44 -- spdk/autotest.sh@111 -- # uname -s 00:03:54.158 16:46:44 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:54.158 16:46:44 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:54.158 16:46:44 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:56.072 Hugepages 00:03:56.072 node hugesize free / total 00:03:56.072 node0 1048576kB 0 / 0 00:03:56.072 node0 2048kB 0 / 0 00:03:56.072 node1 1048576kB 0 / 0 00:03:56.072 node1 2048kB 0 / 0 00:03:56.072 00:03:56.072 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:56.072 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:56.072 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:56.072 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:56.072 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:56.072 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:56.072 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:56.072 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:56.072 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:56.340 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:56.340 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:56.340 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:56.340 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:56.340 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:56.340 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:56.340 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:56.340 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:56.340 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:03:56.340 16:46:48 -- spdk/autotest.sh@117 -- # uname -s 00:03:56.340 16:46:48 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:56.340 16:46:48 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:56.340 16:46:48 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.777 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:59.777 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:59.777 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:59.777 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:59.777 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:59.777 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:59.777 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:59.777 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:59.777 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:59.777 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:59.777 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:59.777 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:59.777 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:00.037 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:00.037 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:00.037 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:01.948 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:01.948 16:46:54 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:02.889 16:46:55 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:02.889 16:46:55 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:02.889 16:46:55 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:03.149 16:46:55 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:03.149 16:46:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:03.149 16:46:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:03.149 16:46:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.149 16:46:55 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:03.149 16:46:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:03.149 16:46:55 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:03.149 16:46:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:03.149 16:46:55 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:06.447 Waiting for block devices as requested 00:04:06.447 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:06.708 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:06.708 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:06.708 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:06.968 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:06.968 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:06.968 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:07.229 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:07.229 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:07.489 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:07.489 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:07.489 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:07.749 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:07.749 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:07.749 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:08.030 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:08.030 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:04:08.297 16:47:00 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:08.297 16:47:00 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:08.297 16:47:00 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:08.297 16:47:00 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:04:08.297 16:47:00 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:08.297 16:47:00 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:08.297 16:47:00 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:08.297 16:47:00 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:08.297 16:47:00 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:08.297 16:47:00 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:08.297 16:47:00 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:08.297 16:47:00 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:08.297 16:47:00 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:08.297 16:47:00 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:04:08.297 16:47:00 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:08.297 16:47:00 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:08.297 16:47:00 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:08.297 16:47:00 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:08.297 16:47:00 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:08.297 16:47:00 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:08.297 16:47:00 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:08.297 16:47:00 -- common/autotest_common.sh@1543 -- # continue 00:04:08.297 16:47:00 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:08.297 16:47:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:08.297 16:47:00 -- common/autotest_common.sh@10 -- # set +x 00:04:08.297 16:47:00 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:08.297 16:47:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.297 16:47:00 -- common/autotest_common.sh@10 -- # set +x 00:04:08.297 16:47:00 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:12.502 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:12.502 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:12.502 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:12.502 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:12.502 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:12.502 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:12.502 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:12.502 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:12.502 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:12.502 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:12.502 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:12.502 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:12.502 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:12.502 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:12.502 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:12.502 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:12.502 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:12.502 16:47:04 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:12.502 16:47:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:12.502 16:47:04 -- common/autotest_common.sh@10 -- # set +x 00:04:12.502 16:47:04 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:12.502 16:47:04 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:12.502 16:47:04 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:12.502 16:47:04 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:12.502 16:47:04 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:12.502 16:47:04 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:12.502 16:47:04 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:12.502 16:47:04 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:12.503 16:47:04 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:12.503 16:47:04 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:12.503 16:47:04 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:12.503 16:47:04 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:12.503 16:47:04 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:12.503 16:47:04 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:12.503 16:47:04 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:12.503 16:47:04 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:12.503 16:47:04 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:12.503 16:47:04 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:04:12.503 16:47:04 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:12.503 16:47:04 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:12.503 16:47:04 -- common/autotest_common.sh@1572 -- # return 0 00:04:12.503 16:47:04 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:12.503 16:47:04 -- common/autotest_common.sh@1580 -- # return 0 00:04:12.503 16:47:04 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:12.503 16:47:04 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:12.503 16:47:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:12.503 16:47:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:12.503 16:47:04 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:12.503 16:47:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:12.503 16:47:04 -- common/autotest_common.sh@10 -- # set +x 00:04:12.503 16:47:04 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:12.503 16:47:04 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:12.503 16:47:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.503 16:47:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.503 16:47:04 -- common/autotest_common.sh@10 -- # set +x 00:04:12.503 ************************************ 00:04:12.503 START TEST env 00:04:12.503 ************************************ 00:04:12.503 16:47:04 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:12.764 * Looking for test storage... 
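The opal_revert_cleanup trace above shows the whole filter in miniature: get_nvme_bdfs collects controller addresses by piping gen_nvme.sh into jq, then each controller's PCI device id is compared against 0x0a54 and only matching disks would be reverted. The lone controller here (144d:a80a) reads back 0xa80a, so the function returns without touching it. A condensed sketch of that flow (the gen_nvme.sh | jq pipeline and the sysfs path are verbatim from the trace; the array names are ours):

    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    opal_bdfs=()
    for bdf in "${bdfs[@]}"; do
        # 0x0a54 is the device id the cleanup path matches on, per the trace
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && opal_bdfs+=("$bdf")
    done
    echo "${#opal_bdfs[@]} controller(s) selected for opal revert"   # 0 on this machine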
00:04:12.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:12.764 16:47:04 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:12.764 16:47:04 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:12.764 16:47:04 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:12.764 16:47:04 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:12.764 16:47:04 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.764 16:47:04 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.764 16:47:04 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.764 16:47:04 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.764 16:47:04 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.764 16:47:04 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.764 16:47:04 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.764 16:47:04 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.764 16:47:04 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.764 16:47:04 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.764 16:47:04 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.764 16:47:04 env -- scripts/common.sh@344 -- # case "$op" in 00:04:12.764 16:47:04 env -- scripts/common.sh@345 -- # : 1 00:04:12.764 16:47:04 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.764 16:47:04 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:12.764 16:47:04 env -- scripts/common.sh@365 -- # decimal 1 00:04:12.764 16:47:04 env -- scripts/common.sh@353 -- # local d=1 00:04:12.764 16:47:04 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.764 16:47:04 env -- scripts/common.sh@355 -- # echo 1 00:04:12.764 16:47:04 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.764 16:47:04 env -- scripts/common.sh@366 -- # decimal 2 00:04:12.764 16:47:04 env -- scripts/common.sh@353 -- # local d=2 00:04:12.764 16:47:04 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.764 16:47:04 env -- scripts/common.sh@355 -- # echo 2 00:04:12.764 16:47:04 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.764 16:47:04 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.764 16:47:04 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.764 16:47:04 env -- scripts/common.sh@368 -- # return 0 00:04:12.764 16:47:04 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.764 16:47:04 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:12.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.764 --rc genhtml_branch_coverage=1 00:04:12.764 --rc genhtml_function_coverage=1 00:04:12.764 --rc genhtml_legend=1 00:04:12.764 --rc geninfo_all_blocks=1 00:04:12.764 --rc geninfo_unexecuted_blocks=1 00:04:12.764 00:04:12.764 ' 00:04:12.764 16:47:04 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:12.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.764 --rc genhtml_branch_coverage=1 00:04:12.764 --rc genhtml_function_coverage=1 00:04:12.764 --rc genhtml_legend=1 00:04:12.764 --rc geninfo_all_blocks=1 00:04:12.764 --rc geninfo_unexecuted_blocks=1 00:04:12.764 00:04:12.764 ' 00:04:12.764 16:47:04 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:12.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.764 --rc genhtml_branch_coverage=1 00:04:12.764 --rc genhtml_function_coverage=1 
00:04:12.764 --rc genhtml_legend=1 00:04:12.764 --rc geninfo_all_blocks=1 00:04:12.764 --rc geninfo_unexecuted_blocks=1 00:04:12.764 00:04:12.764 ' 00:04:12.764 16:47:04 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:12.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.764 --rc genhtml_branch_coverage=1 00:04:12.764 --rc genhtml_function_coverage=1 00:04:12.764 --rc genhtml_legend=1 00:04:12.764 --rc geninfo_all_blocks=1 00:04:12.764 --rc geninfo_unexecuted_blocks=1 00:04:12.764 00:04:12.764 ' 00:04:12.764 16:47:04 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:12.764 16:47:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.764 16:47:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.764 16:47:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.764 ************************************ 00:04:12.764 START TEST env_memory 00:04:12.764 ************************************ 00:04:12.764 16:47:04 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:12.764 00:04:12.764 00:04:12.764 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.764 http://cunit.sourceforge.net/ 00:04:12.764 00:04:12.764 00:04:12.764 Suite: memory 00:04:13.025 Test: alloc and free memory map ...[2024-11-20 16:47:04.947794] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:13.025 passed 00:04:13.025 Test: mem map translation ...[2024-11-20 16:47:04.973265] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:13.025 [2024-11-20 16:47:04.973291] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:13.025 [2024-11-20 16:47:04.973337] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:13.025 [2024-11-20 16:47:04.973345] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:13.025 passed 00:04:13.025 Test: mem map registration ...[2024-11-20 16:47:05.028445] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:13.025 [2024-11-20 16:47:05.028476] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:13.025 passed 00:04:13.025 Test: mem map adjacent registrations ...passed 00:04:13.025 00:04:13.025 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.025 suites 1 1 n/a 0 0 00:04:13.025 tests 4 4 4 0 0 00:04:13.025 asserts 152 152 152 0 n/a 00:04:13.025 00:04:13.025 Elapsed time = 0.191 seconds 00:04:13.025 00:04:13.025 real 0m0.206s 00:04:13.025 user 0m0.194s 00:04:13.025 sys 0m0.011s 00:04:13.025 16:47:05 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.025 16:47:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
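Every suite in this run, including the env_memory block closing just below, is framed by banner lines printed by the run_test helper from autotest_common.sh. A rough paraphrase of what that wrapper does, simplified from the argument-count checks ('[' 2 -le 1 ']') visible in the traces above (not the helper's exact source):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }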
00:04:13.025 ************************************ 00:04:13.025 END TEST env_memory 00:04:13.025 ************************************ 00:04:13.025 16:47:05 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:13.025 16:47:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.025 16:47:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.025 16:47:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.025 ************************************ 00:04:13.025 START TEST env_vtophys 00:04:13.025 ************************************ 00:04:13.025 16:47:05 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:13.286 EAL: lib.eal log level changed from notice to debug 00:04:13.286 EAL: Detected lcore 0 as core 0 on socket 0 00:04:13.286 EAL: Detected lcore 1 as core 1 on socket 0 00:04:13.286 EAL: Detected lcore 2 as core 2 on socket 0 00:04:13.286 EAL: Detected lcore 3 as core 3 on socket 0 00:04:13.286 EAL: Detected lcore 4 as core 4 on socket 0 00:04:13.286 EAL: Detected lcore 5 as core 5 on socket 0 00:04:13.286 EAL: Detected lcore 6 as core 6 on socket 0 00:04:13.286 EAL: Detected lcore 7 as core 7 on socket 0 00:04:13.286 EAL: Detected lcore 8 as core 8 on socket 0 00:04:13.286 EAL: Detected lcore 9 as core 9 on socket 0 00:04:13.286 EAL: Detected lcore 10 as core 10 on socket 0 00:04:13.286 EAL: Detected lcore 11 as core 11 on socket 0 00:04:13.286 EAL: Detected lcore 12 as core 12 on socket 0 00:04:13.286 EAL: Detected lcore 13 as core 13 on socket 0 00:04:13.286 EAL: Detected lcore 14 as core 14 on socket 0 00:04:13.286 EAL: Detected lcore 15 as core 15 on socket 0 00:04:13.286 EAL: Detected lcore 16 as core 16 on socket 0 00:04:13.286 EAL: Detected lcore 17 as core 17 on socket 0 00:04:13.286 EAL: Detected lcore 18 as core 18 on socket 0 00:04:13.286 EAL: Detected lcore 19 as core 19 on socket 0 00:04:13.286 EAL: Detected lcore 20 as core 20 on socket 0 00:04:13.286 EAL: Detected lcore 21 as core 21 on socket 0 00:04:13.286 EAL: Detected lcore 22 as core 22 on socket 0 00:04:13.286 EAL: Detected lcore 23 as core 23 on socket 0 00:04:13.286 EAL: Detected lcore 24 as core 24 on socket 0 00:04:13.286 EAL: Detected lcore 25 as core 25 on socket 0 00:04:13.286 EAL: Detected lcore 26 as core 26 on socket 0 00:04:13.286 EAL: Detected lcore 27 as core 27 on socket 0 00:04:13.286 EAL: Detected lcore 28 as core 28 on socket 0 00:04:13.286 EAL: Detected lcore 29 as core 29 on socket 0 00:04:13.286 EAL: Detected lcore 30 as core 30 on socket 0 00:04:13.286 EAL: Detected lcore 31 as core 31 on socket 0 00:04:13.286 EAL: Detected lcore 32 as core 32 on socket 0 00:04:13.286 EAL: Detected lcore 33 as core 33 on socket 0 00:04:13.286 EAL: Detected lcore 34 as core 34 on socket 0 00:04:13.286 EAL: Detected lcore 35 as core 35 on socket 0 00:04:13.286 EAL: Detected lcore 36 as core 0 on socket 1 00:04:13.286 EAL: Detected lcore 37 as core 1 on socket 1 00:04:13.286 EAL: Detected lcore 38 as core 2 on socket 1 00:04:13.286 EAL: Detected lcore 39 as core 3 on socket 1 00:04:13.286 EAL: Detected lcore 40 as core 4 on socket 1 00:04:13.286 EAL: Detected lcore 41 as core 5 on socket 1 00:04:13.286 EAL: Detected lcore 42 as core 6 on socket 1 00:04:13.286 EAL: Detected lcore 43 as core 7 on socket 1 00:04:13.286 EAL: Detected lcore 44 as core 8 on socket 1 00:04:13.286 EAL: Detected lcore 45 as core 9 on socket 1 
00:04:13.286 EAL: Detected lcore 46 as core 10 on socket 1 00:04:13.286 EAL: Detected lcore 47 as core 11 on socket 1 00:04:13.286 EAL: Detected lcore 48 as core 12 on socket 1 00:04:13.286 EAL: Detected lcore 49 as core 13 on socket 1 00:04:13.286 EAL: Detected lcore 50 as core 14 on socket 1 00:04:13.286 EAL: Detected lcore 51 as core 15 on socket 1 00:04:13.286 EAL: Detected lcore 52 as core 16 on socket 1 00:04:13.286 EAL: Detected lcore 53 as core 17 on socket 1 00:04:13.286 EAL: Detected lcore 54 as core 18 on socket 1 00:04:13.286 EAL: Detected lcore 55 as core 19 on socket 1 00:04:13.286 EAL: Detected lcore 56 as core 20 on socket 1 00:04:13.287 EAL: Detected lcore 57 as core 21 on socket 1 00:04:13.287 EAL: Detected lcore 58 as core 22 on socket 1 00:04:13.287 EAL: Detected lcore 59 as core 23 on socket 1 00:04:13.287 EAL: Detected lcore 60 as core 24 on socket 1 00:04:13.287 EAL: Detected lcore 61 as core 25 on socket 1 00:04:13.287 EAL: Detected lcore 62 as core 26 on socket 1 00:04:13.287 EAL: Detected lcore 63 as core 27 on socket 1 00:04:13.287 EAL: Detected lcore 64 as core 28 on socket 1 00:04:13.287 EAL: Detected lcore 65 as core 29 on socket 1 00:04:13.287 EAL: Detected lcore 66 as core 30 on socket 1 00:04:13.287 EAL: Detected lcore 67 as core 31 on socket 1 00:04:13.287 EAL: Detected lcore 68 as core 32 on socket 1 00:04:13.287 EAL: Detected lcore 69 as core 33 on socket 1 00:04:13.287 EAL: Detected lcore 70 as core 34 on socket 1 00:04:13.287 EAL: Detected lcore 71 as core 35 on socket 1 00:04:13.287 EAL: Detected lcore 72 as core 0 on socket 0 00:04:13.287 EAL: Detected lcore 73 as core 1 on socket 0 00:04:13.287 EAL: Detected lcore 74 as core 2 on socket 0 00:04:13.287 EAL: Detected lcore 75 as core 3 on socket 0 00:04:13.287 EAL: Detected lcore 76 as core 4 on socket 0 00:04:13.287 EAL: Detected lcore 77 as core 5 on socket 0 00:04:13.287 EAL: Detected lcore 78 as core 6 on socket 0 00:04:13.287 EAL: Detected lcore 79 as core 7 on socket 0 00:04:13.287 EAL: Detected lcore 80 as core 8 on socket 0 00:04:13.287 EAL: Detected lcore 81 as core 9 on socket 0 00:04:13.287 EAL: Detected lcore 82 as core 10 on socket 0 00:04:13.287 EAL: Detected lcore 83 as core 11 on socket 0 00:04:13.287 EAL: Detected lcore 84 as core 12 on socket 0 00:04:13.287 EAL: Detected lcore 85 as core 13 on socket 0 00:04:13.287 EAL: Detected lcore 86 as core 14 on socket 0 00:04:13.287 EAL: Detected lcore 87 as core 15 on socket 0 00:04:13.287 EAL: Detected lcore 88 as core 16 on socket 0 00:04:13.287 EAL: Detected lcore 89 as core 17 on socket 0 00:04:13.287 EAL: Detected lcore 90 as core 18 on socket 0 00:04:13.287 EAL: Detected lcore 91 as core 19 on socket 0 00:04:13.287 EAL: Detected lcore 92 as core 20 on socket 0 00:04:13.287 EAL: Detected lcore 93 as core 21 on socket 0 00:04:13.287 EAL: Detected lcore 94 as core 22 on socket 0 00:04:13.287 EAL: Detected lcore 95 as core 23 on socket 0 00:04:13.287 EAL: Detected lcore 96 as core 24 on socket 0 00:04:13.287 EAL: Detected lcore 97 as core 25 on socket 0 00:04:13.287 EAL: Detected lcore 98 as core 26 on socket 0 00:04:13.287 EAL: Detected lcore 99 as core 27 on socket 0 00:04:13.287 EAL: Detected lcore 100 as core 28 on socket 0 00:04:13.287 EAL: Detected lcore 101 as core 29 on socket 0 00:04:13.287 EAL: Detected lcore 102 as core 30 on socket 0 00:04:13.287 EAL: Detected lcore 103 as core 31 on socket 0 00:04:13.287 EAL: Detected lcore 104 as core 32 on socket 0 00:04:13.287 EAL: Detected lcore 105 as core 33 on socket 0 00:04:13.287 EAL: 
Detected lcore 106 as core 34 on socket 0 00:04:13.287 EAL: Detected lcore 107 as core 35 on socket 0 00:04:13.287 EAL: Detected lcore 108 as core 0 on socket 1 00:04:13.287 EAL: Detected lcore 109 as core 1 on socket 1 00:04:13.287 EAL: Detected lcore 110 as core 2 on socket 1 00:04:13.287 EAL: Detected lcore 111 as core 3 on socket 1 00:04:13.287 EAL: Detected lcore 112 as core 4 on socket 1 00:04:13.287 EAL: Detected lcore 113 as core 5 on socket 1 00:04:13.287 EAL: Detected lcore 114 as core 6 on socket 1 00:04:13.287 EAL: Detected lcore 115 as core 7 on socket 1 00:04:13.287 EAL: Detected lcore 116 as core 8 on socket 1 00:04:13.287 EAL: Detected lcore 117 as core 9 on socket 1 00:04:13.287 EAL: Detected lcore 118 as core 10 on socket 1 00:04:13.287 EAL: Detected lcore 119 as core 11 on socket 1 00:04:13.287 EAL: Detected lcore 120 as core 12 on socket 1 00:04:13.287 EAL: Detected lcore 121 as core 13 on socket 1 00:04:13.287 EAL: Detected lcore 122 as core 14 on socket 1 00:04:13.287 EAL: Detected lcore 123 as core 15 on socket 1 00:04:13.287 EAL: Detected lcore 124 as core 16 on socket 1 00:04:13.287 EAL: Detected lcore 125 as core 17 on socket 1 00:04:13.287 EAL: Detected lcore 126 as core 18 on socket 1 00:04:13.287 EAL: Detected lcore 127 as core 19 on socket 1 00:04:13.287 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:13.287 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:13.287 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:13.287 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:13.287 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:13.287 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:13.287 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:13.287 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:13.287 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:13.287 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:13.287 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:13.287 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:13.287 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:13.287 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:13.287 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:13.287 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:13.287 EAL: Maximum logical cores by configuration: 128 00:04:13.287 EAL: Detected CPU lcores: 128 00:04:13.287 EAL: Detected NUMA nodes: 2 00:04:13.287 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:13.287 EAL: Detected shared linkage of DPDK 00:04:13.287 EAL: No shared files mode enabled, IPC will be disabled 00:04:13.287 EAL: Bus pci wants IOVA as 'DC' 00:04:13.287 EAL: Buses did not request a specific IOVA mode. 00:04:13.287 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:13.287 EAL: Selected IOVA mode 'VA' 00:04:13.287 EAL: Probing VFIO support... 00:04:13.287 EAL: IOMMU type 1 (Type 1) is supported 00:04:13.287 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:13.287 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:13.287 EAL: VFIO support initialized 00:04:13.287 EAL: Ask a virtual area of 0x2e000 bytes 00:04:13.287 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:13.287 EAL: Setting up physically contiguous memory... 
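The memseg reservations that follow are easy to sanity-check: EAL creates 4 segment lists per NUMA socket, each holding n_segs:8192 pages of hugepage_sz:2097152 bytes, so every list needs an 8192 * 2 MiB = 16 GiB virtual area. That is exactly the 0x400000000-byte reservations requested below, each preceded by a small 0x61000-byte area that we take to be per-list bookkeeping. The arithmetic alone (this is not SPDK or DPDK code):

    printf '0x%x bytes per memseg list\n' $(( 8192 * 2097152 ))   # -> 0x400000000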
00:04:13.287 EAL: Setting maximum number of open files to 524288 00:04:13.287 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:13.287 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:13.287 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:13.287 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.287 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:13.287 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.287 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.287 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:13.287 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:13.287 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.287 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:13.287 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.287 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.287 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:13.287 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:13.287 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.287 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:13.287 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.287 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.287 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:13.287 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:13.287 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.287 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:13.287 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.287 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.287 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:13.287 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:13.287 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:13.287 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.287 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:13.287 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:13.287 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.287 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:13.287 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:13.287 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.287 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:13.287 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:13.287 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.287 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:13.287 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:13.287 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.287 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:13.287 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:13.287 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.287 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:13.287 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:13.287 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.287 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:13.287 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:13.287 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.287 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:13.287 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:13.287 EAL: Hugepages will be freed exactly as allocated. 00:04:13.287 EAL: No shared files mode enabled, IPC is disabled 00:04:13.287 EAL: No shared files mode enabled, IPC is disabled 00:04:13.287 EAL: TSC frequency is ~2400000 KHz 00:04:13.287 EAL: Main lcore 0 is ready (tid=7fba53ab8a00;cpuset=[0]) 00:04:13.287 EAL: Trying to obtain current memory policy. 00:04:13.287 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.287 EAL: Restoring previous memory policy: 0 00:04:13.287 EAL: request: mp_malloc_sync 00:04:13.287 EAL: No shared files mode enabled, IPC is disabled 00:04:13.287 EAL: Heap on socket 0 was expanded by 2MB 00:04:13.287 EAL: No shared files mode enabled, IPC is disabled 00:04:13.287 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:13.287 EAL: Mem event callback 'spdk:(nil)' registered 00:04:13.287 00:04:13.287 00:04:13.287 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.287 http://cunit.sourceforge.net/ 00:04:13.287 00:04:13.287 00:04:13.287 Suite: components_suite 00:04:13.287 Test: vtophys_malloc_test ...passed 00:04:13.287 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:13.287 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.287 EAL: Restoring previous memory policy: 4 00:04:13.287 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.287 EAL: request: mp_malloc_sync 00:04:13.287 EAL: No shared files mode enabled, IPC is disabled 00:04:13.287 EAL: Heap on socket 0 was expanded by 4MB 00:04:13.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.288 EAL: request: mp_malloc_sync 00:04:13.288 EAL: No shared files mode enabled, IPC is disabled 00:04:13.288 EAL: Heap on socket 0 was shrunk by 4MB 00:04:13.288 EAL: Trying to obtain current memory policy. 00:04:13.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.288 EAL: Restoring previous memory policy: 4 00:04:13.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.288 EAL: request: mp_malloc_sync 00:04:13.288 EAL: No shared files mode enabled, IPC is disabled 00:04:13.288 EAL: Heap on socket 0 was expanded by 6MB 00:04:13.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.288 EAL: request: mp_malloc_sync 00:04:13.288 EAL: No shared files mode enabled, IPC is disabled 00:04:13.288 EAL: Heap on socket 0 was shrunk by 6MB 00:04:13.288 EAL: Trying to obtain current memory policy. 00:04:13.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.288 EAL: Restoring previous memory policy: 4 00:04:13.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.288 EAL: request: mp_malloc_sync 00:04:13.288 EAL: No shared files mode enabled, IPC is disabled 00:04:13.288 EAL: Heap on socket 0 was expanded by 10MB 00:04:13.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.288 EAL: request: mp_malloc_sync 00:04:13.288 EAL: No shared files mode enabled, IPC is disabled 00:04:13.288 EAL: Heap on socket 0 was shrunk by 10MB 00:04:13.288 EAL: Trying to obtain current memory policy. 
00:04:13.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.288 EAL: Restoring previous memory policy: 4 00:04:13.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.288 EAL: request: mp_malloc_sync 00:04:13.288 EAL: No shared files mode enabled, IPC is disabled 00:04:13.288 EAL: Heap on socket 0 was expanded by 18MB 00:04:13.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.288 EAL: request: mp_malloc_sync 00:04:13.288 EAL: No shared files mode enabled, IPC is disabled 00:04:13.288 EAL: Heap on socket 0 was shrunk by 18MB 00:04:13.288 EAL: Trying to obtain current memory policy. 00:04:13.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.288 EAL: Restoring previous memory policy: 4 00:04:13.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.288 EAL: request: mp_malloc_sync 00:04:13.288 EAL: No shared files mode enabled, IPC is disabled 00:04:13.288 EAL: Heap on socket 0 was expanded by 34MB 00:04:13.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.288 EAL: request: mp_malloc_sync 00:04:13.288 EAL: No shared files mode enabled, IPC is disabled 00:04:13.288 EAL: Heap on socket 0 was shrunk by 34MB 00:04:13.288 EAL: Trying to obtain current memory policy. 00:04:13.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.288 EAL: Restoring previous memory policy: 4 00:04:13.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.288 EAL: request: mp_malloc_sync 00:04:13.288 EAL: No shared files mode enabled, IPC is disabled 00:04:13.288 EAL: Heap on socket 0 was expanded by 66MB 00:04:13.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.288 EAL: request: mp_malloc_sync 00:04:13.288 EAL: No shared files mode enabled, IPC is disabled 00:04:13.288 EAL: Heap on socket 0 was shrunk by 66MB 00:04:13.288 EAL: Trying to obtain current memory policy. 00:04:13.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.288 EAL: Restoring previous memory policy: 4 00:04:13.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.288 EAL: request: mp_malloc_sync 00:04:13.288 EAL: No shared files mode enabled, IPC is disabled 00:04:13.288 EAL: Heap on socket 0 was expanded by 130MB 00:04:13.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.288 EAL: request: mp_malloc_sync 00:04:13.288 EAL: No shared files mode enabled, IPC is disabled 00:04:13.288 EAL: Heap on socket 0 was shrunk by 130MB 00:04:13.288 EAL: Trying to obtain current memory policy. 00:04:13.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.288 EAL: Restoring previous memory policy: 4 00:04:13.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.288 EAL: request: mp_malloc_sync 00:04:13.288 EAL: No shared files mode enabled, IPC is disabled 00:04:13.288 EAL: Heap on socket 0 was expanded by 258MB 00:04:13.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.549 EAL: request: mp_malloc_sync 00:04:13.549 EAL: No shared files mode enabled, IPC is disabled 00:04:13.549 EAL: Heap on socket 0 was shrunk by 258MB 00:04:13.549 EAL: Trying to obtain current memory policy. 
00:04:13.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.549 EAL: Restoring previous memory policy: 4 00:04:13.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.549 EAL: request: mp_malloc_sync 00:04:13.549 EAL: No shared files mode enabled, IPC is disabled 00:04:13.549 EAL: Heap on socket 0 was expanded by 514MB 00:04:13.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.549 EAL: request: mp_malloc_sync 00:04:13.549 EAL: No shared files mode enabled, IPC is disabled 00:04:13.549 EAL: Heap on socket 0 was shrunk by 514MB 00:04:13.549 EAL: Trying to obtain current memory policy. 00:04:13.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.809 EAL: Restoring previous memory policy: 4 00:04:13.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.809 EAL: request: mp_malloc_sync 00:04:13.809 EAL: No shared files mode enabled, IPC is disabled 00:04:13.809 EAL: Heap on socket 0 was expanded by 1026MB 00:04:13.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.071 EAL: request: mp_malloc_sync 00:04:14.071 EAL: No shared files mode enabled, IPC is disabled 00:04:14.071 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:14.071 passed 00:04:14.071 00:04:14.071 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.071 suites 1 1 n/a 0 0 00:04:14.071 tests 2 2 2 0 0 00:04:14.071 asserts 497 497 497 0 n/a 00:04:14.071 00:04:14.071 Elapsed time = 0.683 seconds 00:04:14.071 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.071 EAL: request: mp_malloc_sync 00:04:14.071 EAL: No shared files mode enabled, IPC is disabled 00:04:14.071 EAL: Heap on socket 0 was shrunk by 2MB 00:04:14.071 EAL: No shared files mode enabled, IPC is disabled 00:04:14.071 EAL: No shared files mode enabled, IPC is disabled 00:04:14.071 EAL: No shared files mode enabled, IPC is disabled 00:04:14.071 00:04:14.071 real 0m0.831s 00:04:14.071 user 0m0.434s 00:04:14.071 sys 0m0.372s 00:04:14.071 16:47:06 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.071 16:47:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:14.071 ************************************ 00:04:14.071 END TEST env_vtophys 00:04:14.071 ************************************ 00:04:14.071 16:47:06 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:14.071 16:47:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.071 16:47:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.071 16:47:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.071 ************************************ 00:04:14.071 START TEST env_pci 00:04:14.071 ************************************ 00:04:14.071 16:47:06 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:14.071 00:04:14.071 00:04:14.071 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.071 http://cunit.sourceforge.net/ 00:04:14.071 00:04:14.071 00:04:14.071 Suite: pci 00:04:14.071 Test: pci_hook ...[2024-11-20 16:47:06.108991] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1719808 has claimed it 00:04:14.071 EAL: Cannot find device (10000:00:01.0) 00:04:14.071 EAL: Failed to attach device on primary process 00:04:14.071 passed 00:04:14.071 00:04:14.071 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:14.071 suites 1 1 n/a 0 0 00:04:14.071 tests 1 1 1 0 0 00:04:14.071 asserts 25 25 25 0 n/a 00:04:14.071 00:04:14.071 Elapsed time = 0.030 seconds 00:04:14.071 00:04:14.071 real 0m0.052s 00:04:14.071 user 0m0.017s 00:04:14.071 sys 0m0.035s 00:04:14.071 16:47:06 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.071 16:47:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:14.071 ************************************ 00:04:14.071 END TEST env_pci 00:04:14.071 ************************************ 00:04:14.071 16:47:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:14.071 16:47:06 env -- env/env.sh@15 -- # uname 00:04:14.071 16:47:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:14.072 16:47:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:14.072 16:47:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:14.072 16:47:06 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:14.072 16:47:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.072 16:47:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.072 ************************************ 00:04:14.072 START TEST env_dpdk_post_init 00:04:14.072 ************************************ 00:04:14.072 16:47:06 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:14.333 EAL: Detected CPU lcores: 128 00:04:14.333 EAL: Detected NUMA nodes: 2 00:04:14.333 EAL: Detected shared linkage of DPDK 00:04:14.333 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:14.333 EAL: Selected IOVA mode 'VA' 00:04:14.333 EAL: VFIO support initialized 00:04:14.333 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:14.333 EAL: Using IOMMU type 1 (Type 1) 00:04:14.333 EAL: Ignore mapping IO port bar(1) 00:04:14.593 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:14.593 EAL: Ignore mapping IO port bar(1) 00:04:14.853 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:14.853 EAL: Ignore mapping IO port bar(1) 00:04:15.114 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:15.114 EAL: Ignore mapping IO port bar(1) 00:04:15.114 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:15.374 EAL: Ignore mapping IO port bar(1) 00:04:15.374 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:15.635 EAL: Ignore mapping IO port bar(1) 00:04:15.635 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:15.895 EAL: Ignore mapping IO port bar(1) 00:04:15.895 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:15.895 EAL: Ignore mapping IO port bar(1) 00:04:16.157 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:16.418 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:16.418 EAL: Ignore mapping IO port bar(1) 00:04:16.679 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:16.679 EAL: Ignore mapping IO port bar(1) 00:04:16.679 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:16.939 EAL: Ignore mapping IO port bar(1) 00:04:16.939 
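The env_pci result above is a deliberate negative test: pci_hook claims a fake device address and then attempts to attach it again, and because spdk_pci_device_claim() backs each claim with a lock file under /var/tmp, the second attempt fails with the "Cannot create lock" error shown. The fake domain 10000:00:01.0 guarantees no real device is touched. If a test process dies uncleanly, leftover locks can be inspected using the same naming pattern seen in that error:

    # Lock files follow the path shown in the pci_hook error above.
    ls -l /var/tmp/spdk_pci_lock_* 2>/dev/null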
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:17.201 EAL: Ignore mapping IO port bar(1) 00:04:17.201 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:17.465 EAL: Ignore mapping IO port bar(1) 00:04:17.465 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:17.465 EAL: Ignore mapping IO port bar(1) 00:04:17.726 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:17.726 EAL: Ignore mapping IO port bar(1) 00:04:18.009 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:18.009 EAL: Ignore mapping IO port bar(1) 00:04:18.269 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:18.269 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:18.269 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:18.269 Starting DPDK initialization... 00:04:18.269 Starting SPDK post initialization... 00:04:18.269 SPDK NVMe probe 00:04:18.269 Attaching to 0000:65:00.0 00:04:18.269 Attached to 0000:65:00.0 00:04:18.269 Cleaning up... 00:04:20.181 00:04:20.181 real 0m5.746s 00:04:20.181 user 0m0.107s 00:04:20.181 sys 0m0.195s 00:04:20.181 16:47:11 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.181 16:47:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.181 ************************************ 00:04:20.181 END TEST env_dpdk_post_init 00:04:20.181 ************************************ 00:04:20.181 16:47:12 env -- env/env.sh@26 -- # uname 00:04:20.181 16:47:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:20.181 16:47:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:20.181 16:47:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.181 16:47:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.181 16:47:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.181 ************************************ 00:04:20.181 START TEST env_mem_callbacks 00:04:20.181 ************************************ 00:04:20.181 16:47:12 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:20.181 EAL: Detected CPU lcores: 128 00:04:20.181 EAL: Detected NUMA nodes: 2 00:04:20.182 EAL: Detected shared linkage of DPDK 00:04:20.182 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:20.182 EAL: Selected IOVA mode 'VA' 00:04:20.182 EAL: VFIO support initialized 00:04:20.182 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:20.182 00:04:20.182 00:04:20.182 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.182 http://cunit.sourceforge.net/ 00:04:20.182 00:04:20.182 00:04:20.182 Suite: memory 00:04:20.182 Test: test ... 
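The memory suite whose trace follows exercises the spdk_mem_register/spdk_mem_unregister notification path: each heap growth is reported as a "register" of the 2MiB-aligned backing region, each release as an "unregister", and the buf/len PASSED lines in between confirm the test's allocations landed inside registered regions. To rerun just this binary (path taken verbatim from this workspace; hugepage setup and root privileges assumed):

    sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks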
00:04:20.182 register 0x200000200000 2097152 00:04:20.182 malloc 3145728 00:04:20.182 register 0x200000400000 4194304 00:04:20.182 buf 0x200000500000 len 3145728 PASSED 00:04:20.182 malloc 64 00:04:20.182 buf 0x2000004fff40 len 64 PASSED 00:04:20.182 malloc 4194304 00:04:20.182 register 0x200000800000 6291456 00:04:20.182 buf 0x200000a00000 len 4194304 PASSED 00:04:20.182 free 0x200000500000 3145728 00:04:20.182 free 0x2000004fff40 64 00:04:20.182 unregister 0x200000400000 4194304 PASSED 00:04:20.182 free 0x200000a00000 4194304 00:04:20.182 unregister 0x200000800000 6291456 PASSED 00:04:20.182 malloc 8388608 00:04:20.182 register 0x200000400000 10485760 00:04:20.182 buf 0x200000600000 len 8388608 PASSED 00:04:20.182 free 0x200000600000 8388608 00:04:20.182 unregister 0x200000400000 10485760 PASSED 00:04:20.182 passed 00:04:20.182 00:04:20.182 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.182 suites 1 1 n/a 0 0 00:04:20.182 tests 1 1 1 0 0 00:04:20.182 asserts 15 15 15 0 n/a 00:04:20.182 00:04:20.182 Elapsed time = 0.010 seconds 00:04:20.182 00:04:20.182 real 0m0.069s 00:04:20.182 user 0m0.019s 00:04:20.182 sys 0m0.050s 00:04:20.182 16:47:12 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.182 16:47:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:20.182 ************************************ 00:04:20.182 END TEST env_mem_callbacks 00:04:20.182 ************************************ 00:04:20.182 00:04:20.182 real 0m7.519s 00:04:20.182 user 0m1.047s 00:04:20.182 sys 0m1.040s 00:04:20.182 16:47:12 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.182 16:47:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.182 ************************************ 00:04:20.182 END TEST env 00:04:20.182 ************************************ 00:04:20.182 16:47:12 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:20.182 16:47:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.182 16:47:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.182 16:47:12 -- common/autotest_common.sh@10 -- # set +x 00:04:20.182 ************************************ 00:04:20.182 START TEST rpc 00:04:20.182 ************************************ 00:04:20.182 16:47:12 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:20.182 * Looking for test storage... 
00:04:20.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:20.443 16:47:12 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:20.443 16:47:12 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:20.443 16:47:12 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:20.443 16:47:12 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:20.443 16:47:12 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.443 16:47:12 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.443 16:47:12 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.443 16:47:12 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.443 16:47:12 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.443 16:47:12 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.443 16:47:12 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.443 16:47:12 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.443 16:47:12 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.443 16:47:12 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.443 16:47:12 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.443 16:47:12 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:20.443 16:47:12 rpc -- scripts/common.sh@345 -- # : 1 00:04:20.443 16:47:12 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.443 16:47:12 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:20.443 16:47:12 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:20.443 16:47:12 rpc -- scripts/common.sh@353 -- # local d=1 00:04:20.443 16:47:12 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.443 16:47:12 rpc -- scripts/common.sh@355 -- # echo 1 00:04:20.443 16:47:12 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.443 16:47:12 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:20.443 16:47:12 rpc -- scripts/common.sh@353 -- # local d=2 00:04:20.443 16:47:12 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.443 16:47:12 rpc -- scripts/common.sh@355 -- # echo 2 00:04:20.443 16:47:12 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.443 16:47:12 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.443 16:47:12 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.443 16:47:12 rpc -- scripts/common.sh@368 -- # return 0 00:04:20.443 16:47:12 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.443 16:47:12 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:20.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.443 --rc genhtml_branch_coverage=1 00:04:20.443 --rc genhtml_function_coverage=1 00:04:20.443 --rc genhtml_legend=1 00:04:20.443 --rc geninfo_all_blocks=1 00:04:20.443 --rc geninfo_unexecuted_blocks=1 00:04:20.443 00:04:20.443 ' 00:04:20.443 16:47:12 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:20.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.443 --rc genhtml_branch_coverage=1 00:04:20.443 --rc genhtml_function_coverage=1 00:04:20.443 --rc genhtml_legend=1 00:04:20.443 --rc geninfo_all_blocks=1 00:04:20.443 --rc geninfo_unexecuted_blocks=1 00:04:20.443 00:04:20.443 ' 00:04:20.443 16:47:12 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:20.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.443 --rc genhtml_branch_coverage=1 00:04:20.443 --rc genhtml_function_coverage=1 
00:04:20.443 --rc genhtml_legend=1 00:04:20.443 --rc geninfo_all_blocks=1 00:04:20.443 --rc geninfo_unexecuted_blocks=1 00:04:20.443 00:04:20.443 ' 00:04:20.443 16:47:12 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:20.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.443 --rc genhtml_branch_coverage=1 00:04:20.443 --rc genhtml_function_coverage=1 00:04:20.443 --rc genhtml_legend=1 00:04:20.443 --rc geninfo_all_blocks=1 00:04:20.443 --rc geninfo_unexecuted_blocks=1 00:04:20.443 00:04:20.443 ' 00:04:20.443 16:47:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1721183 00:04:20.443 16:47:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.443 16:47:12 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:20.443 16:47:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1721183 00:04:20.443 16:47:12 rpc -- common/autotest_common.sh@835 -- # '[' -z 1721183 ']' 00:04:20.443 16:47:12 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.443 16:47:12 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.443 16:47:12 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.443 16:47:12 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.443 16:47:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.443 [2024-11-20 16:47:12.521387] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:04:20.443 [2024-11-20 16:47:12.521461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1721183 ] 00:04:20.443 [2024-11-20 16:47:12.613990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.704 [2024-11-20 16:47:12.665865] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:20.704 [2024-11-20 16:47:12.665922] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1721183' to capture a snapshot of events at runtime. 00:04:20.704 [2024-11-20 16:47:12.665930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:20.704 [2024-11-20 16:47:12.665938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:20.704 [2024-11-20 16:47:12.665945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1721183 for offline analysis/debug. 
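The startup NOTICEs above spell out the tracing setup: spdk_tgt was launched with '-e bdev', so the bdev tracepoint group (mask 0x8) is enabled and records are written to the shared-memory file /dev/shm/spdk_tgt_trace.pid1721183. The capture command below is quoted directly from the app_setup_trace NOTICE; after the target exits, the same shm file can be copied aside for offline decoding with the same tool:

    # Command text as printed by app_setup_trace above:
    spdk_trace -s spdk_tgt -p 1721183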
00:04:20.704 [2024-11-20 16:47:12.666767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.276 16:47:13 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.276 16:47:13 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:21.276 16:47:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:21.276 16:47:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:21.276 16:47:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:21.276 16:47:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:21.276 16:47:13 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.276 16:47:13 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.276 16:47:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.276 ************************************ 00:04:21.276 START TEST rpc_integrity 00:04:21.276 ************************************ 00:04:21.276 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:21.276 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:21.276 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.276 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.276 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.276 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:21.276 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:21.276 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:21.276 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:21.276 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.276 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.536 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.536 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:21.536 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:21.536 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.536 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.536 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.536 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:21.536 { 00:04:21.536 "name": "Malloc0", 00:04:21.536 "aliases": [ 00:04:21.536 "8e05cb1e-ef0e-4eed-bced-93809ba328b6" 00:04:21.536 ], 00:04:21.536 "product_name": "Malloc disk", 00:04:21.536 "block_size": 512, 00:04:21.536 "num_blocks": 16384, 00:04:21.536 "uuid": "8e05cb1e-ef0e-4eed-bced-93809ba328b6", 00:04:21.536 "assigned_rate_limits": { 00:04:21.536 "rw_ios_per_sec": 0, 00:04:21.536 "rw_mbytes_per_sec": 0, 00:04:21.536 "r_mbytes_per_sec": 0, 00:04:21.536 "w_mbytes_per_sec": 0 00:04:21.536 }, 
00:04:21.536 "claimed": false, 00:04:21.536 "zoned": false, 00:04:21.536 "supported_io_types": { 00:04:21.536 "read": true, 00:04:21.536 "write": true, 00:04:21.536 "unmap": true, 00:04:21.536 "flush": true, 00:04:21.536 "reset": true, 00:04:21.536 "nvme_admin": false, 00:04:21.536 "nvme_io": false, 00:04:21.536 "nvme_io_md": false, 00:04:21.536 "write_zeroes": true, 00:04:21.536 "zcopy": true, 00:04:21.536 "get_zone_info": false, 00:04:21.536 "zone_management": false, 00:04:21.536 "zone_append": false, 00:04:21.536 "compare": false, 00:04:21.536 "compare_and_write": false, 00:04:21.536 "abort": true, 00:04:21.536 "seek_hole": false, 00:04:21.536 "seek_data": false, 00:04:21.536 "copy": true, 00:04:21.536 "nvme_iov_md": false 00:04:21.536 }, 00:04:21.536 "memory_domains": [ 00:04:21.536 { 00:04:21.536 "dma_device_id": "system", 00:04:21.536 "dma_device_type": 1 00:04:21.536 }, 00:04:21.536 { 00:04:21.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.536 "dma_device_type": 2 00:04:21.536 } 00:04:21.536 ], 00:04:21.536 "driver_specific": {} 00:04:21.536 } 00:04:21.536 ]' 00:04:21.536 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:21.537 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:21.537 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:21.537 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.537 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.537 [2024-11-20 16:47:13.526101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:21.537 [2024-11-20 16:47:13.526148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:21.537 [2024-11-20 16:47:13.526180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1034800 00:04:21.537 [2024-11-20 16:47:13.526189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:21.537 [2024-11-20 16:47:13.527760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:21.537 [2024-11-20 16:47:13.527796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:21.537 Passthru0 00:04:21.537 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.537 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:21.537 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.537 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.537 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.537 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:21.537 { 00:04:21.537 "name": "Malloc0", 00:04:21.537 "aliases": [ 00:04:21.537 "8e05cb1e-ef0e-4eed-bced-93809ba328b6" 00:04:21.537 ], 00:04:21.537 "product_name": "Malloc disk", 00:04:21.537 "block_size": 512, 00:04:21.537 "num_blocks": 16384, 00:04:21.537 "uuid": "8e05cb1e-ef0e-4eed-bced-93809ba328b6", 00:04:21.537 "assigned_rate_limits": { 00:04:21.537 "rw_ios_per_sec": 0, 00:04:21.537 "rw_mbytes_per_sec": 0, 00:04:21.537 "r_mbytes_per_sec": 0, 00:04:21.537 "w_mbytes_per_sec": 0 00:04:21.537 }, 00:04:21.537 "claimed": true, 00:04:21.537 "claim_type": "exclusive_write", 00:04:21.537 "zoned": false, 00:04:21.537 "supported_io_types": { 00:04:21.537 "read": true, 00:04:21.537 "write": true, 00:04:21.537 "unmap": true, 00:04:21.537 "flush": 
true, 00:04:21.537 "reset": true, 00:04:21.537 "nvme_admin": false, 00:04:21.537 "nvme_io": false, 00:04:21.537 "nvme_io_md": false, 00:04:21.537 "write_zeroes": true, 00:04:21.537 "zcopy": true, 00:04:21.537 "get_zone_info": false, 00:04:21.537 "zone_management": false, 00:04:21.537 "zone_append": false, 00:04:21.537 "compare": false, 00:04:21.537 "compare_and_write": false, 00:04:21.537 "abort": true, 00:04:21.537 "seek_hole": false, 00:04:21.537 "seek_data": false, 00:04:21.537 "copy": true, 00:04:21.537 "nvme_iov_md": false 00:04:21.537 }, 00:04:21.537 "memory_domains": [ 00:04:21.537 { 00:04:21.537 "dma_device_id": "system", 00:04:21.537 "dma_device_type": 1 00:04:21.537 }, 00:04:21.537 { 00:04:21.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.537 "dma_device_type": 2 00:04:21.537 } 00:04:21.537 ], 00:04:21.537 "driver_specific": {} 00:04:21.537 }, 00:04:21.537 { 00:04:21.537 "name": "Passthru0", 00:04:21.537 "aliases": [ 00:04:21.537 "6b15b919-4aff-5fff-8f09-27cf97f1c00d" 00:04:21.537 ], 00:04:21.537 "product_name": "passthru", 00:04:21.537 "block_size": 512, 00:04:21.537 "num_blocks": 16384, 00:04:21.537 "uuid": "6b15b919-4aff-5fff-8f09-27cf97f1c00d", 00:04:21.537 "assigned_rate_limits": { 00:04:21.537 "rw_ios_per_sec": 0, 00:04:21.537 "rw_mbytes_per_sec": 0, 00:04:21.537 "r_mbytes_per_sec": 0, 00:04:21.537 "w_mbytes_per_sec": 0 00:04:21.537 }, 00:04:21.537 "claimed": false, 00:04:21.537 "zoned": false, 00:04:21.537 "supported_io_types": { 00:04:21.537 "read": true, 00:04:21.537 "write": true, 00:04:21.537 "unmap": true, 00:04:21.537 "flush": true, 00:04:21.537 "reset": true, 00:04:21.537 "nvme_admin": false, 00:04:21.537 "nvme_io": false, 00:04:21.537 "nvme_io_md": false, 00:04:21.537 "write_zeroes": true, 00:04:21.537 "zcopy": true, 00:04:21.537 "get_zone_info": false, 00:04:21.537 "zone_management": false, 00:04:21.537 "zone_append": false, 00:04:21.537 "compare": false, 00:04:21.537 "compare_and_write": false, 00:04:21.537 "abort": true, 00:04:21.537 "seek_hole": false, 00:04:21.537 "seek_data": false, 00:04:21.537 "copy": true, 00:04:21.537 "nvme_iov_md": false 00:04:21.537 }, 00:04:21.537 "memory_domains": [ 00:04:21.537 { 00:04:21.537 "dma_device_id": "system", 00:04:21.537 "dma_device_type": 1 00:04:21.537 }, 00:04:21.537 { 00:04:21.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.537 "dma_device_type": 2 00:04:21.537 } 00:04:21.537 ], 00:04:21.537 "driver_specific": { 00:04:21.537 "passthru": { 00:04:21.537 "name": "Passthru0", 00:04:21.537 "base_bdev_name": "Malloc0" 00:04:21.537 } 00:04:21.537 } 00:04:21.537 } 00:04:21.537 ]' 00:04:21.537 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:21.537 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:21.537 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:21.537 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.537 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.537 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.537 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:21.537 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.537 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.537 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.537 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:21.537 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.537 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.537 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.537 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:21.537 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:21.537 16:47:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:21.537 00:04:21.537 real 0m0.300s 00:04:21.537 user 0m0.187s 00:04:21.537 sys 0m0.042s 00:04:21.537 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.537 16:47:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.537 ************************************ 00:04:21.537 END TEST rpc_integrity 00:04:21.537 ************************************ 00:04:21.799 16:47:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:21.799 16:47:13 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.799 16:47:13 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.799 16:47:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.799 ************************************ 00:04:21.799 START TEST rpc_plugins 00:04:21.799 ************************************ 00:04:21.799 16:47:13 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:21.799 16:47:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:21.799 16:47:13 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.799 16:47:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.799 16:47:13 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.799 16:47:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:21.799 16:47:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:21.799 16:47:13 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.799 16:47:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.799 16:47:13 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.799 16:47:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:21.799 { 00:04:21.799 "name": "Malloc1", 00:04:21.799 "aliases": [ 00:04:21.799 "084521f6-bd56-445e-8f47-fbc87b1b4825" 00:04:21.799 ], 00:04:21.799 "product_name": "Malloc disk", 00:04:21.799 "block_size": 4096, 00:04:21.799 "num_blocks": 256, 00:04:21.799 "uuid": "084521f6-bd56-445e-8f47-fbc87b1b4825", 00:04:21.799 "assigned_rate_limits": { 00:04:21.799 "rw_ios_per_sec": 0, 00:04:21.799 "rw_mbytes_per_sec": 0, 00:04:21.799 "r_mbytes_per_sec": 0, 00:04:21.799 "w_mbytes_per_sec": 0 00:04:21.799 }, 00:04:21.799 "claimed": false, 00:04:21.799 "zoned": false, 00:04:21.799 "supported_io_types": { 00:04:21.799 "read": true, 00:04:21.799 "write": true, 00:04:21.799 "unmap": true, 00:04:21.799 "flush": true, 00:04:21.799 "reset": true, 00:04:21.799 "nvme_admin": false, 00:04:21.799 "nvme_io": false, 00:04:21.799 "nvme_io_md": false, 00:04:21.799 "write_zeroes": true, 00:04:21.799 "zcopy": true, 00:04:21.799 "get_zone_info": false, 00:04:21.799 "zone_management": false, 00:04:21.799 "zone_append": false, 00:04:21.799 "compare": false, 00:04:21.799 "compare_and_write": false, 00:04:21.799 "abort": true, 00:04:21.799 "seek_hole": false, 00:04:21.799 "seek_data": false, 00:04:21.799 "copy": true, 00:04:21.799 "nvme_iov_md": false 
00:04:21.799 }, 00:04:21.799 "memory_domains": [ 00:04:21.799 { 00:04:21.799 "dma_device_id": "system", 00:04:21.799 "dma_device_type": 1 00:04:21.799 }, 00:04:21.799 { 00:04:21.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.799 "dma_device_type": 2 00:04:21.799 } 00:04:21.799 ], 00:04:21.799 "driver_specific": {} 00:04:21.799 } 00:04:21.799 ]' 00:04:21.799 16:47:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:21.799 16:47:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:21.799 16:47:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:21.799 16:47:13 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.799 16:47:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.799 16:47:13 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.799 16:47:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:21.799 16:47:13 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.799 16:47:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.799 16:47:13 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.799 16:47:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:21.799 16:47:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:21.799 16:47:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:21.799 00:04:21.799 real 0m0.151s 00:04:21.799 user 0m0.094s 00:04:21.799 sys 0m0.021s 00:04:21.799 16:47:13 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.799 16:47:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.799 ************************************ 00:04:21.799 END TEST rpc_plugins 00:04:21.799 ************************************ 00:04:21.799 16:47:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:21.799 16:47:13 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.799 16:47:13 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.799 16:47:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.060 ************************************ 00:04:22.060 START TEST rpc_trace_cmd_test 00:04:22.060 ************************************ 00:04:22.060 16:47:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:22.060 16:47:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:22.060 16:47:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:22.060 16:47:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.060 16:47:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:22.060 16:47:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.060 16:47:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:22.060 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1721183", 00:04:22.060 "tpoint_group_mask": "0x8", 00:04:22.060 "iscsi_conn": { 00:04:22.060 "mask": "0x2", 00:04:22.060 "tpoint_mask": "0x0" 00:04:22.060 }, 00:04:22.060 "scsi": { 00:04:22.060 "mask": "0x4", 00:04:22.060 "tpoint_mask": "0x0" 00:04:22.060 }, 00:04:22.060 "bdev": { 00:04:22.060 "mask": "0x8", 00:04:22.060 "tpoint_mask": "0xffffffffffffffff" 00:04:22.060 }, 00:04:22.060 "nvmf_rdma": { 00:04:22.060 "mask": "0x10", 00:04:22.060 "tpoint_mask": "0x0" 00:04:22.060 }, 00:04:22.060 "nvmf_tcp": { 00:04:22.060 "mask": "0x20", 00:04:22.060 
"tpoint_mask": "0x0" 00:04:22.060 }, 00:04:22.060 "ftl": { 00:04:22.060 "mask": "0x40", 00:04:22.060 "tpoint_mask": "0x0" 00:04:22.060 }, 00:04:22.060 "blobfs": { 00:04:22.060 "mask": "0x80", 00:04:22.060 "tpoint_mask": "0x0" 00:04:22.060 }, 00:04:22.060 "dsa": { 00:04:22.060 "mask": "0x200", 00:04:22.060 "tpoint_mask": "0x0" 00:04:22.060 }, 00:04:22.060 "thread": { 00:04:22.060 "mask": "0x400", 00:04:22.060 "tpoint_mask": "0x0" 00:04:22.060 }, 00:04:22.060 "nvme_pcie": { 00:04:22.060 "mask": "0x800", 00:04:22.060 "tpoint_mask": "0x0" 00:04:22.060 }, 00:04:22.060 "iaa": { 00:04:22.060 "mask": "0x1000", 00:04:22.060 "tpoint_mask": "0x0" 00:04:22.060 }, 00:04:22.060 "nvme_tcp": { 00:04:22.060 "mask": "0x2000", 00:04:22.060 "tpoint_mask": "0x0" 00:04:22.060 }, 00:04:22.060 "bdev_nvme": { 00:04:22.060 "mask": "0x4000", 00:04:22.060 "tpoint_mask": "0x0" 00:04:22.060 }, 00:04:22.060 "sock": { 00:04:22.060 "mask": "0x8000", 00:04:22.060 "tpoint_mask": "0x0" 00:04:22.060 }, 00:04:22.060 "blob": { 00:04:22.060 "mask": "0x10000", 00:04:22.060 "tpoint_mask": "0x0" 00:04:22.060 }, 00:04:22.060 "bdev_raid": { 00:04:22.060 "mask": "0x20000", 00:04:22.060 "tpoint_mask": "0x0" 00:04:22.060 }, 00:04:22.060 "scheduler": { 00:04:22.060 "mask": "0x40000", 00:04:22.060 "tpoint_mask": "0x0" 00:04:22.060 } 00:04:22.060 }' 00:04:22.060 16:47:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:22.060 16:47:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:22.060 16:47:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:22.060 16:47:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:22.060 16:47:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:22.060 16:47:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:22.060 16:47:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:22.060 16:47:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:22.060 16:47:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:22.321 16:47:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:22.321 00:04:22.321 real 0m0.253s 00:04:22.321 user 0m0.203s 00:04:22.321 sys 0m0.043s 00:04:22.321 16:47:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.321 16:47:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:22.321 ************************************ 00:04:22.321 END TEST rpc_trace_cmd_test 00:04:22.321 ************************************ 00:04:22.321 16:47:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:22.321 16:47:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:22.321 16:47:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:22.321 16:47:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.322 16:47:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.322 16:47:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.322 ************************************ 00:04:22.322 START TEST rpc_daemon_integrity 00:04:22.322 ************************************ 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.322 16:47:14 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:22.322 { 00:04:22.322 "name": "Malloc2", 00:04:22.322 "aliases": [ 00:04:22.322 "68ad226d-0777-4f8b-ac3b-b8ba959e1760" 00:04:22.322 ], 00:04:22.322 "product_name": "Malloc disk", 00:04:22.322 "block_size": 512, 00:04:22.322 "num_blocks": 16384, 00:04:22.322 "uuid": "68ad226d-0777-4f8b-ac3b-b8ba959e1760", 00:04:22.322 "assigned_rate_limits": { 00:04:22.322 "rw_ios_per_sec": 0, 00:04:22.322 "rw_mbytes_per_sec": 0, 00:04:22.322 "r_mbytes_per_sec": 0, 00:04:22.322 "w_mbytes_per_sec": 0 00:04:22.322 }, 00:04:22.322 "claimed": false, 00:04:22.322 "zoned": false, 00:04:22.322 "supported_io_types": { 00:04:22.322 "read": true, 00:04:22.322 "write": true, 00:04:22.322 "unmap": true, 00:04:22.322 "flush": true, 00:04:22.322 "reset": true, 00:04:22.322 "nvme_admin": false, 00:04:22.322 "nvme_io": false, 00:04:22.322 "nvme_io_md": false, 00:04:22.322 "write_zeroes": true, 00:04:22.322 "zcopy": true, 00:04:22.322 "get_zone_info": false, 00:04:22.322 "zone_management": false, 00:04:22.322 "zone_append": false, 00:04:22.322 "compare": false, 00:04:22.322 "compare_and_write": false, 00:04:22.322 "abort": true, 00:04:22.322 "seek_hole": false, 00:04:22.322 "seek_data": false, 00:04:22.322 "copy": true, 00:04:22.322 "nvme_iov_md": false 00:04:22.322 }, 00:04:22.322 "memory_domains": [ 00:04:22.322 { 00:04:22.322 "dma_device_id": "system", 00:04:22.322 "dma_device_type": 1 00:04:22.322 }, 00:04:22.322 { 00:04:22.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.322 "dma_device_type": 2 00:04:22.322 } 00:04:22.322 ], 00:04:22.322 "driver_specific": {} 00:04:22.322 } 00:04:22.322 ]' 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.322 [2024-11-20 16:47:14.480694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:22.322 
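The vbdev_passthru NOTICE sequence around this point traces the claim flow: match on the base bdev, open it, create the io_device, then claim it. That claim is why Malloc2 appears with "claimed": true and "claim_type": "exclusive_write" in the bdev_get_bdevs dump below, while the Passthru0 bdev layered on top stays unclaimed. A manual equivalent against a running target, using the same RPCs the harness wraps via rpc_cmd:

    scripts/rpc.py bdev_malloc_create 8 512 -b Malloc2
    scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
    scripts/rpc.py bdev_get_bdevs | jq '.[] | {name, claimed}'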
[2024-11-20 16:47:14.480738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:22.322 [2024-11-20 16:47:14.480753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xef0fe0 00:04:22.322 [2024-11-20 16:47:14.480762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:22.322 [2024-11-20 16:47:14.482242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:22.322 [2024-11-20 16:47:14.482277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:22.322 Passthru0 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.322 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:22.584 { 00:04:22.584 "name": "Malloc2", 00:04:22.584 "aliases": [ 00:04:22.584 "68ad226d-0777-4f8b-ac3b-b8ba959e1760" 00:04:22.584 ], 00:04:22.584 "product_name": "Malloc disk", 00:04:22.584 "block_size": 512, 00:04:22.584 "num_blocks": 16384, 00:04:22.584 "uuid": "68ad226d-0777-4f8b-ac3b-b8ba959e1760", 00:04:22.584 "assigned_rate_limits": { 00:04:22.584 "rw_ios_per_sec": 0, 00:04:22.584 "rw_mbytes_per_sec": 0, 00:04:22.584 "r_mbytes_per_sec": 0, 00:04:22.584 "w_mbytes_per_sec": 0 00:04:22.584 }, 00:04:22.584 "claimed": true, 00:04:22.584 "claim_type": "exclusive_write", 00:04:22.584 "zoned": false, 00:04:22.584 "supported_io_types": { 00:04:22.584 "read": true, 00:04:22.584 "write": true, 00:04:22.584 "unmap": true, 00:04:22.584 "flush": true, 00:04:22.584 "reset": true, 00:04:22.584 "nvme_admin": false, 00:04:22.584 "nvme_io": false, 00:04:22.584 "nvme_io_md": false, 00:04:22.584 "write_zeroes": true, 00:04:22.584 "zcopy": true, 00:04:22.584 "get_zone_info": false, 00:04:22.584 "zone_management": false, 00:04:22.584 "zone_append": false, 00:04:22.584 "compare": false, 00:04:22.584 "compare_and_write": false, 00:04:22.584 "abort": true, 00:04:22.584 "seek_hole": false, 00:04:22.584 "seek_data": false, 00:04:22.584 "copy": true, 00:04:22.584 "nvme_iov_md": false 00:04:22.584 }, 00:04:22.584 "memory_domains": [ 00:04:22.584 { 00:04:22.584 "dma_device_id": "system", 00:04:22.584 "dma_device_type": 1 00:04:22.584 }, 00:04:22.584 { 00:04:22.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.584 "dma_device_type": 2 00:04:22.584 } 00:04:22.584 ], 00:04:22.584 "driver_specific": {} 00:04:22.584 }, 00:04:22.584 { 00:04:22.584 "name": "Passthru0", 00:04:22.584 "aliases": [ 00:04:22.584 "e0e913c0-5029-5401-a332-fb5c8b8ee95a" 00:04:22.584 ], 00:04:22.584 "product_name": "passthru", 00:04:22.584 "block_size": 512, 00:04:22.584 "num_blocks": 16384, 00:04:22.584 "uuid": "e0e913c0-5029-5401-a332-fb5c8b8ee95a", 00:04:22.584 "assigned_rate_limits": { 00:04:22.584 "rw_ios_per_sec": 0, 00:04:22.584 "rw_mbytes_per_sec": 0, 00:04:22.584 "r_mbytes_per_sec": 0, 00:04:22.584 "w_mbytes_per_sec": 0 00:04:22.584 }, 00:04:22.584 "claimed": false, 00:04:22.584 "zoned": false, 00:04:22.584 "supported_io_types": { 00:04:22.584 "read": true, 00:04:22.584 "write": true, 00:04:22.584 "unmap": true, 00:04:22.584 "flush": true, 00:04:22.584 "reset": true, 
00:04:22.584 "nvme_admin": false, 00:04:22.584 "nvme_io": false, 00:04:22.584 "nvme_io_md": false, 00:04:22.584 "write_zeroes": true, 00:04:22.584 "zcopy": true, 00:04:22.584 "get_zone_info": false, 00:04:22.584 "zone_management": false, 00:04:22.584 "zone_append": false, 00:04:22.584 "compare": false, 00:04:22.584 "compare_and_write": false, 00:04:22.584 "abort": true, 00:04:22.584 "seek_hole": false, 00:04:22.584 "seek_data": false, 00:04:22.584 "copy": true, 00:04:22.584 "nvme_iov_md": false 00:04:22.584 }, 00:04:22.584 "memory_domains": [ 00:04:22.584 { 00:04:22.584 "dma_device_id": "system", 00:04:22.584 "dma_device_type": 1 00:04:22.584 }, 00:04:22.584 { 00:04:22.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.584 "dma_device_type": 2 00:04:22.584 } 00:04:22.584 ], 00:04:22.584 "driver_specific": { 00:04:22.584 "passthru": { 00:04:22.584 "name": "Passthru0", 00:04:22.584 "base_bdev_name": "Malloc2" 00:04:22.584 } 00:04:22.584 } 00:04:22.584 } 00:04:22.584 ]' 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:22.584 00:04:22.584 real 0m0.301s 00:04:22.584 user 0m0.184s 00:04:22.584 sys 0m0.048s 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.584 16:47:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.584 ************************************ 00:04:22.584 END TEST rpc_daemon_integrity 00:04:22.584 ************************************ 00:04:22.584 16:47:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:22.584 16:47:14 rpc -- rpc/rpc.sh@84 -- # killprocess 1721183 00:04:22.584 16:47:14 rpc -- common/autotest_common.sh@954 -- # '[' -z 1721183 ']' 00:04:22.584 16:47:14 rpc -- common/autotest_common.sh@958 -- # kill -0 1721183 00:04:22.584 16:47:14 rpc -- common/autotest_common.sh@959 -- # uname 00:04:22.584 16:47:14 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.584 16:47:14 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1721183 
00:04:22.584 16:47:14 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.584 16:47:14 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.584 16:47:14 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1721183' 00:04:22.584 killing process with pid 1721183 00:04:22.584 16:47:14 rpc -- common/autotest_common.sh@973 -- # kill 1721183 00:04:22.584 16:47:14 rpc -- common/autotest_common.sh@978 -- # wait 1721183 00:04:22.846 00:04:22.846 real 0m2.733s 00:04:22.846 user 0m3.482s 00:04:22.846 sys 0m0.855s 00:04:22.846 16:47:14 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.846 16:47:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.846 ************************************ 00:04:22.846 END TEST rpc 00:04:22.846 ************************************ 00:04:23.107 16:47:15 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:23.107 16:47:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.107 16:47:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.107 16:47:15 -- common/autotest_common.sh@10 -- # set +x 00:04:23.107 ************************************ 00:04:23.107 START TEST skip_rpc 00:04:23.107 ************************************ 00:04:23.107 16:47:15 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:23.107 * Looking for test storage... 00:04:23.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:23.107 16:47:15 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:23.107 16:47:15 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:23.107 16:47:15 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:23.107 16:47:15 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.107 16:47:15 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:23.107 16:47:15 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.107 16:47:15 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:23.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.107 --rc genhtml_branch_coverage=1 00:04:23.107 --rc genhtml_function_coverage=1 00:04:23.107 --rc genhtml_legend=1 00:04:23.107 --rc geninfo_all_blocks=1 00:04:23.107 --rc geninfo_unexecuted_blocks=1 00:04:23.107 00:04:23.107 ' 00:04:23.107 16:47:15 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:23.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.107 --rc genhtml_branch_coverage=1 00:04:23.107 --rc genhtml_function_coverage=1 00:04:23.107 --rc genhtml_legend=1 00:04:23.107 --rc geninfo_all_blocks=1 00:04:23.107 --rc geninfo_unexecuted_blocks=1 00:04:23.107 00:04:23.107 ' 00:04:23.107 16:47:15 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:23.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.107 --rc genhtml_branch_coverage=1 00:04:23.107 --rc genhtml_function_coverage=1 00:04:23.107 --rc genhtml_legend=1 00:04:23.107 --rc geninfo_all_blocks=1 00:04:23.107 --rc geninfo_unexecuted_blocks=1 00:04:23.107 00:04:23.107 ' 00:04:23.107 16:47:15 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:23.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.107 --rc genhtml_branch_coverage=1 00:04:23.107 --rc genhtml_function_coverage=1 00:04:23.107 --rc genhtml_legend=1 00:04:23.107 --rc geninfo_all_blocks=1 00:04:23.107 --rc geninfo_unexecuted_blocks=1 00:04:23.107 00:04:23.107 ' 00:04:23.107 16:47:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:23.107 16:47:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:23.107 16:47:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:23.107 16:47:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.107 16:47:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.107 16:47:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.368 ************************************ 00:04:23.368 START TEST skip_rpc 00:04:23.368 ************************************ 00:04:23.368 16:47:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:23.368 
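The skip_rpc case that follows is another negative test: spdk_tgt is started with --no-rpc-server, so no Unix-domain RPC listener is ever created, and the harness asserts that rpc_cmd spdk_get_version fails (the NOT wrapper expects es=1) while the target itself keeps running and can be killed cleanly. A minimal reproduction, assuming a built SPDK tree and the default socket path:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    scripts/rpc.py spdk_get_version && echo "unexpected: RPC answered" || echo "RPC unavailable, as expected"
    kill %1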
16:47:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1722030 00:04:23.368 16:47:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.368 16:47:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:23.368 16:47:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:23.368 [2024-11-20 16:47:15.384236] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:04:23.368 [2024-11-20 16:47:15.384294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1722030 ] 00:04:23.368 [2024-11-20 16:47:15.478627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.368 [2024-11-20 16:47:15.533430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1722030 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1722030 ']' 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1722030 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1722030 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1722030' 00:04:28.659 killing process with pid 1722030 00:04:28.659 16:47:20 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1722030 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1722030 00:04:28.659 00:04:28.659 real 0m5.268s 00:04:28.659 user 0m5.028s 00:04:28.659 sys 0m0.289s 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.659 16:47:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.659 ************************************ 00:04:28.659 END TEST skip_rpc 00:04:28.659 ************************************ 00:04:28.659 16:47:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:28.659 16:47:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.659 16:47:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.659 16:47:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.659 ************************************ 00:04:28.659 START TEST skip_rpc_with_json 00:04:28.659 ************************************ 00:04:28.659 16:47:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:28.659 16:47:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:28.659 16:47:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1723069 00:04:28.659 16:47:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.659 16:47:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.659 16:47:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1723069 00:04:28.659 16:47:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1723069 ']' 00:04:28.659 16:47:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.659 16:47:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.659 16:47:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.659 16:47:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.659 16:47:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.659 [2024-11-20 16:47:20.717259] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
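[Editor's sketch] The waitforlisten call above is the gate for everything that follows: no RPC may be issued until spdk_tgt answers on /var/tmp/spdk.sock. A minimal sketch of that polling idea, assuming paths relative to the spdk checkout (the real autotest_common.sh helper adds retry limits and address parsing beyond this):

    # Poll until the target's RPC server answers on its Unix socket.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died before listening
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                              # RPC server is up
            fi
            sleep 0.1
        done
        return 1
    }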
00:04:28.659 [2024-11-20 16:47:20.717307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1723069 ] 00:04:28.659 [2024-11-20 16:47:20.802664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.659 [2024-11-20 16:47:20.832301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.601 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.601 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:29.601 16:47:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:29.601 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.601 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.601 [2024-11-20 16:47:21.518186] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:29.601 request: 00:04:29.601 { 00:04:29.601 "trtype": "tcp", 00:04:29.601 "method": "nvmf_get_transports", 00:04:29.601 "req_id": 1 00:04:29.601 } 00:04:29.601 Got JSON-RPC error response 00:04:29.601 response: 00:04:29.601 { 00:04:29.601 "code": -19, 00:04:29.601 "message": "No such device" 00:04:29.601 } 00:04:29.601 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:29.601 16:47:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:29.601 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.601 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.601 [2024-11-20 16:47:21.530282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:29.601 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.601 16:47:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:29.601 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.601 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.601 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.601 16:47:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:29.601 { 00:04:29.601 "subsystems": [ 00:04:29.601 { 00:04:29.601 "subsystem": "fsdev", 00:04:29.601 "config": [ 00:04:29.601 { 00:04:29.601 "method": "fsdev_set_opts", 00:04:29.601 "params": { 00:04:29.601 "fsdev_io_pool_size": 65535, 00:04:29.601 "fsdev_io_cache_size": 256 00:04:29.601 } 00:04:29.601 } 00:04:29.601 ] 00:04:29.601 }, 00:04:29.601 { 00:04:29.601 "subsystem": "vfio_user_target", 00:04:29.601 "config": null 00:04:29.601 }, 00:04:29.601 { 00:04:29.601 "subsystem": "keyring", 00:04:29.601 "config": [] 00:04:29.601 }, 00:04:29.601 { 00:04:29.601 "subsystem": "iobuf", 00:04:29.601 "config": [ 00:04:29.601 { 00:04:29.601 "method": "iobuf_set_options", 00:04:29.601 "params": { 00:04:29.601 "small_pool_count": 8192, 00:04:29.601 "large_pool_count": 1024, 00:04:29.601 "small_bufsize": 8192, 00:04:29.601 "large_bufsize": 135168, 00:04:29.601 "enable_numa": false 00:04:29.601 } 00:04:29.601 } 
00:04:29.601 ] 00:04:29.601 }, 00:04:29.601 { 00:04:29.601 "subsystem": "sock", 00:04:29.601 "config": [ 00:04:29.601 { 00:04:29.601 "method": "sock_set_default_impl", 00:04:29.601 "params": { 00:04:29.601 "impl_name": "posix" 00:04:29.601 } 00:04:29.601 }, 00:04:29.601 { 00:04:29.601 "method": "sock_impl_set_options", 00:04:29.601 "params": { 00:04:29.601 "impl_name": "ssl", 00:04:29.601 "recv_buf_size": 4096, 00:04:29.601 "send_buf_size": 4096, 00:04:29.601 "enable_recv_pipe": true, 00:04:29.601 "enable_quickack": false, 00:04:29.601 "enable_placement_id": 0, 00:04:29.601 "enable_zerocopy_send_server": true, 00:04:29.601 "enable_zerocopy_send_client": false, 00:04:29.601 "zerocopy_threshold": 0, 00:04:29.601 "tls_version": 0, 00:04:29.601 "enable_ktls": false 00:04:29.601 } 00:04:29.601 }, 00:04:29.601 { 00:04:29.601 "method": "sock_impl_set_options", 00:04:29.601 "params": { 00:04:29.601 "impl_name": "posix", 00:04:29.602 "recv_buf_size": 2097152, 00:04:29.602 "send_buf_size": 2097152, 00:04:29.602 "enable_recv_pipe": true, 00:04:29.602 "enable_quickack": false, 00:04:29.602 "enable_placement_id": 0, 00:04:29.602 "enable_zerocopy_send_server": true, 00:04:29.602 "enable_zerocopy_send_client": false, 00:04:29.602 "zerocopy_threshold": 0, 00:04:29.602 "tls_version": 0, 00:04:29.602 "enable_ktls": false 00:04:29.602 } 00:04:29.602 } 00:04:29.602 ] 00:04:29.602 }, 00:04:29.602 { 00:04:29.602 "subsystem": "vmd", 00:04:29.602 "config": [] 00:04:29.602 }, 00:04:29.602 { 00:04:29.602 "subsystem": "accel", 00:04:29.602 "config": [ 00:04:29.602 { 00:04:29.602 "method": "accel_set_options", 00:04:29.602 "params": { 00:04:29.602 "small_cache_size": 128, 00:04:29.602 "large_cache_size": 16, 00:04:29.602 "task_count": 2048, 00:04:29.602 "sequence_count": 2048, 00:04:29.602 "buf_count": 2048 00:04:29.602 } 00:04:29.602 } 00:04:29.602 ] 00:04:29.602 }, 00:04:29.602 { 00:04:29.602 "subsystem": "bdev", 00:04:29.602 "config": [ 00:04:29.602 { 00:04:29.602 "method": "bdev_set_options", 00:04:29.602 "params": { 00:04:29.602 "bdev_io_pool_size": 65535, 00:04:29.602 "bdev_io_cache_size": 256, 00:04:29.602 "bdev_auto_examine": true, 00:04:29.602 "iobuf_small_cache_size": 128, 00:04:29.602 "iobuf_large_cache_size": 16 00:04:29.602 } 00:04:29.602 }, 00:04:29.602 { 00:04:29.602 "method": "bdev_raid_set_options", 00:04:29.602 "params": { 00:04:29.602 "process_window_size_kb": 1024, 00:04:29.602 "process_max_bandwidth_mb_sec": 0 00:04:29.602 } 00:04:29.602 }, 00:04:29.602 { 00:04:29.602 "method": "bdev_iscsi_set_options", 00:04:29.602 "params": { 00:04:29.602 "timeout_sec": 30 00:04:29.602 } 00:04:29.602 }, 00:04:29.602 { 00:04:29.602 "method": "bdev_nvme_set_options", 00:04:29.602 "params": { 00:04:29.602 "action_on_timeout": "none", 00:04:29.602 "timeout_us": 0, 00:04:29.602 "timeout_admin_us": 0, 00:04:29.602 "keep_alive_timeout_ms": 10000, 00:04:29.602 "arbitration_burst": 0, 00:04:29.602 "low_priority_weight": 0, 00:04:29.602 "medium_priority_weight": 0, 00:04:29.602 "high_priority_weight": 0, 00:04:29.602 "nvme_adminq_poll_period_us": 10000, 00:04:29.602 "nvme_ioq_poll_period_us": 0, 00:04:29.602 "io_queue_requests": 0, 00:04:29.602 "delay_cmd_submit": true, 00:04:29.602 "transport_retry_count": 4, 00:04:29.602 "bdev_retry_count": 3, 00:04:29.602 "transport_ack_timeout": 0, 00:04:29.602 "ctrlr_loss_timeout_sec": 0, 00:04:29.602 "reconnect_delay_sec": 0, 00:04:29.602 "fast_io_fail_timeout_sec": 0, 00:04:29.602 "disable_auto_failback": false, 00:04:29.602 "generate_uuids": false, 00:04:29.602 "transport_tos": 
0, 00:04:29.602 "nvme_error_stat": false, 00:04:29.602 "rdma_srq_size": 0, 00:04:29.602 "io_path_stat": false, 00:04:29.602 "allow_accel_sequence": false, 00:04:29.602 "rdma_max_cq_size": 0, 00:04:29.602 "rdma_cm_event_timeout_ms": 0, 00:04:29.602 "dhchap_digests": [ 00:04:29.602 "sha256", 00:04:29.602 "sha384", 00:04:29.602 "sha512" 00:04:29.602 ], 00:04:29.602 "dhchap_dhgroups": [ 00:04:29.602 "null", 00:04:29.602 "ffdhe2048", 00:04:29.602 "ffdhe3072", 00:04:29.602 "ffdhe4096", 00:04:29.602 "ffdhe6144", 00:04:29.602 "ffdhe8192" 00:04:29.602 ] 00:04:29.602 } 00:04:29.602 }, 00:04:29.602 { 00:04:29.602 "method": "bdev_nvme_set_hotplug", 00:04:29.602 "params": { 00:04:29.602 "period_us": 100000, 00:04:29.602 "enable": false 00:04:29.602 } 00:04:29.602 }, 00:04:29.602 { 00:04:29.602 "method": "bdev_wait_for_examine" 00:04:29.602 } 00:04:29.602 ] 00:04:29.602 }, 00:04:29.602 { 00:04:29.602 "subsystem": "scsi", 00:04:29.602 "config": null 00:04:29.602 }, 00:04:29.602 { 00:04:29.602 "subsystem": "scheduler", 00:04:29.602 "config": [ 00:04:29.602 { 00:04:29.602 "method": "framework_set_scheduler", 00:04:29.602 "params": { 00:04:29.602 "name": "static" 00:04:29.602 } 00:04:29.602 } 00:04:29.602 ] 00:04:29.602 }, 00:04:29.602 { 00:04:29.602 "subsystem": "vhost_scsi", 00:04:29.602 "config": [] 00:04:29.602 }, 00:04:29.602 { 00:04:29.602 "subsystem": "vhost_blk", 00:04:29.602 "config": [] 00:04:29.602 }, 00:04:29.602 { 00:04:29.602 "subsystem": "ublk", 00:04:29.602 "config": [] 00:04:29.602 }, 00:04:29.602 { 00:04:29.602 "subsystem": "nbd", 00:04:29.602 "config": [] 00:04:29.602 }, 00:04:29.602 { 00:04:29.602 "subsystem": "nvmf", 00:04:29.602 "config": [ 00:04:29.602 { 00:04:29.602 "method": "nvmf_set_config", 00:04:29.602 "params": { 00:04:29.602 "discovery_filter": "match_any", 00:04:29.602 "admin_cmd_passthru": { 00:04:29.602 "identify_ctrlr": false 00:04:29.602 }, 00:04:29.602 "dhchap_digests": [ 00:04:29.602 "sha256", 00:04:29.602 "sha384", 00:04:29.602 "sha512" 00:04:29.602 ], 00:04:29.602 "dhchap_dhgroups": [ 00:04:29.602 "null", 00:04:29.602 "ffdhe2048", 00:04:29.602 "ffdhe3072", 00:04:29.602 "ffdhe4096", 00:04:29.602 "ffdhe6144", 00:04:29.602 "ffdhe8192" 00:04:29.602 ] 00:04:29.602 } 00:04:29.602 }, 00:04:29.602 { 00:04:29.602 "method": "nvmf_set_max_subsystems", 00:04:29.602 "params": { 00:04:29.602 "max_subsystems": 1024 00:04:29.602 } 00:04:29.602 }, 00:04:29.602 { 00:04:29.602 "method": "nvmf_set_crdt", 00:04:29.602 "params": { 00:04:29.602 "crdt1": 0, 00:04:29.602 "crdt2": 0, 00:04:29.602 "crdt3": 0 00:04:29.602 } 00:04:29.602 }, 00:04:29.602 { 00:04:29.602 "method": "nvmf_create_transport", 00:04:29.602 "params": { 00:04:29.602 "trtype": "TCP", 00:04:29.602 "max_queue_depth": 128, 00:04:29.602 "max_io_qpairs_per_ctrlr": 127, 00:04:29.602 "in_capsule_data_size": 4096, 00:04:29.602 "max_io_size": 131072, 00:04:29.602 "io_unit_size": 131072, 00:04:29.602 "max_aq_depth": 128, 00:04:29.602 "num_shared_buffers": 511, 00:04:29.602 "buf_cache_size": 4294967295, 00:04:29.602 "dif_insert_or_strip": false, 00:04:29.602 "zcopy": false, 00:04:29.602 "c2h_success": true, 00:04:29.602 "sock_priority": 0, 00:04:29.602 "abort_timeout_sec": 1, 00:04:29.602 "ack_timeout": 0, 00:04:29.602 "data_wr_pool_size": 0 00:04:29.602 } 00:04:29.602 } 00:04:29.602 ] 00:04:29.602 }, 00:04:29.602 { 00:04:29.602 "subsystem": "iscsi", 00:04:29.602 "config": [ 00:04:29.602 { 00:04:29.602 "method": "iscsi_set_options", 00:04:29.602 "params": { 00:04:29.602 "node_base": "iqn.2016-06.io.spdk", 00:04:29.602 "max_sessions": 
128, 00:04:29.602 "max_connections_per_session": 2, 00:04:29.602 "max_queue_depth": 64, 00:04:29.602 "default_time2wait": 2, 00:04:29.602 "default_time2retain": 20, 00:04:29.602 "first_burst_length": 8192, 00:04:29.602 "immediate_data": true, 00:04:29.602 "allow_duplicated_isid": false, 00:04:29.602 "error_recovery_level": 0, 00:04:29.602 "nop_timeout": 60, 00:04:29.602 "nop_in_interval": 30, 00:04:29.602 "disable_chap": false, 00:04:29.602 "require_chap": false, 00:04:29.602 "mutual_chap": false, 00:04:29.602 "chap_group": 0, 00:04:29.602 "max_large_datain_per_connection": 64, 00:04:29.602 "max_r2t_per_connection": 4, 00:04:29.602 "pdu_pool_size": 36864, 00:04:29.602 "immediate_data_pool_size": 16384, 00:04:29.602 "data_out_pool_size": 2048 00:04:29.602 } 00:04:29.602 } 00:04:29.602 ] 00:04:29.602 } 00:04:29.602 ] 00:04:29.602 } 00:04:29.602 16:47:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:29.602 16:47:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1723069 00:04:29.602 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1723069 ']' 00:04:29.602 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1723069 00:04:29.602 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:29.602 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.602 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1723069 00:04:29.862 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.862 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.862 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1723069' 00:04:29.862 killing process with pid 1723069 00:04:29.862 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1723069 00:04:29.862 16:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1723069 00:04:29.862 16:47:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1723409 00:04:29.862 16:47:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:29.862 16:47:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:35.142 16:47:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1723409 00:04:35.142 16:47:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1723409 ']' 00:04:35.142 16:47:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1723409 00:04:35.142 16:47:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:35.142 16:47:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.142 16:47:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1723409 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1723409' 00:04:35.142 killing process with pid 1723409 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1723409 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1723409 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:35.142 00:04:35.142 real 0m6.564s 00:04:35.142 user 0m6.482s 00:04:35.142 sys 0m0.561s 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.142 ************************************ 00:04:35.142 END TEST skip_rpc_with_json 00:04:35.142 ************************************ 00:04:35.142 16:47:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:35.142 16:47:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.142 16:47:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.142 16:47:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.142 ************************************ 00:04:35.142 START TEST skip_rpc_with_delay 00:04:35.142 ************************************ 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:35.142 16:47:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:35.404 
[2024-11-20 16:47:27.363667] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:35.404 16:47:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:35.404 16:47:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:35.404 16:47:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:35.404 16:47:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:35.404 00:04:35.404 real 0m0.079s 00:04:35.404 user 0m0.048s 00:04:35.404 sys 0m0.030s 00:04:35.404 16:47:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.404 16:47:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:35.404 ************************************ 00:04:35.404 END TEST skip_rpc_with_delay 00:04:35.404 ************************************ 00:04:35.404 16:47:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:35.404 16:47:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:35.404 16:47:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:35.404 16:47:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.404 16:47:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.404 16:47:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.404 ************************************ 00:04:35.404 START TEST exit_on_failed_rpc_init 00:04:35.404 ************************************ 00:04:35.404 16:47:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:35.404 16:47:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1724480 00:04:35.404 16:47:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1724480 00:04:35.404 16:47:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.404 16:47:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1724480 ']' 00:04:35.404 16:47:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.404 16:47:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.404 16:47:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.404 16:47:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.404 16:47:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:35.404 [2024-11-20 16:47:27.523811] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
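[Editor's sketch] That *ERROR* is the expected result: the launch is wrapped in NOT, which passes only when the wrapped command fails, and the es=1 bookkeeping above is the exit-status check. A stripped-down sketch of the pattern (the real helper also runs valid_exec_arg and treats some exit-code ranges specially):

    # Invert a command's exit status: fail on success, succeed on failure.
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0        # command failed, which is what the test wants
    }

    # Usage mirroring the trace: this flag pair must be rejected.
    NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc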
00:04:35.404 [2024-11-20 16:47:27.523871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1724480 ] 00:04:35.664 [2024-11-20 16:47:27.612149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.664 [2024-11-20 16:47:27.647467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.233 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.233 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:36.233 16:47:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.233 16:47:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:36.233 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:36.233 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:36.233 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.233 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.233 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.233 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.233 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.233 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.233 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.233 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:36.233 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:36.233 [2024-11-20 16:47:28.373833] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:04:36.233 [2024-11-20 16:47:28.373887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1724659 ] 00:04:36.494 [2024-11-20 16:47:28.459196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.494 [2024-11-20 16:47:28.495416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.494 [2024-11-20 16:47:28.495465] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:36.494 [2024-11-20 16:47:28.495474] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:36.494 [2024-11-20 16:47:28.495481] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:36.494 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:36.494 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:36.494 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:36.494 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:36.494 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:36.494 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:36.494 16:47:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:36.494 16:47:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1724480 00:04:36.494 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1724480 ']' 00:04:36.494 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1724480 00:04:36.494 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:36.494 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.494 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1724480 00:04:36.494 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.494 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.494 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1724480' 00:04:36.494 killing process with pid 1724480 00:04:36.494 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1724480 00:04:36.494 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1724480 00:04:36.754 00:04:36.754 real 0m1.319s 00:04:36.754 user 0m1.528s 00:04:36.754 sys 0m0.399s 00:04:36.754 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.754 16:47:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:36.754 ************************************ 00:04:36.754 END TEST exit_on_failed_rpc_init 00:04:36.754 ************************************ 00:04:36.754 16:47:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:36.754 00:04:36.754 real 0m13.758s 00:04:36.754 user 0m13.321s 00:04:36.754 sys 0m1.600s 00:04:36.754 16:47:28 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.754 16:47:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.754 ************************************ 00:04:36.754 END TEST skip_rpc 00:04:36.754 ************************************ 00:04:36.754 16:47:28 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:36.754 16:47:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.754 16:47:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.754 16:47:28 -- 
common/autotest_common.sh@10 -- # set +x 00:04:36.754 ************************************ 00:04:36.754 START TEST rpc_client 00:04:36.754 ************************************ 00:04:36.754 16:47:28 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:37.015 * Looking for test storage... 00:04:37.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:37.015 16:47:28 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.015 16:47:28 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.015 16:47:28 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.015 16:47:29 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.015 16:47:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:37.015 16:47:29 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.015 16:47:29 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.015 --rc genhtml_branch_coverage=1 00:04:37.015 --rc genhtml_function_coverage=1 00:04:37.015 --rc genhtml_legend=1 00:04:37.015 --rc geninfo_all_blocks=1 00:04:37.015 --rc geninfo_unexecuted_blocks=1 00:04:37.015 00:04:37.015 ' 00:04:37.015 16:47:29 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.015 --rc genhtml_branch_coverage=1 00:04:37.015 --rc genhtml_function_coverage=1 00:04:37.015 --rc genhtml_legend=1 00:04:37.015 --rc geninfo_all_blocks=1 00:04:37.015 --rc geninfo_unexecuted_blocks=1 00:04:37.015 00:04:37.015 ' 00:04:37.015 16:47:29 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:37.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.015 --rc genhtml_branch_coverage=1 00:04:37.015 --rc genhtml_function_coverage=1 00:04:37.015 --rc genhtml_legend=1 00:04:37.015 --rc geninfo_all_blocks=1 00:04:37.015 --rc geninfo_unexecuted_blocks=1 00:04:37.015 00:04:37.015 ' 00:04:37.015 16:47:29 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.015 --rc genhtml_branch_coverage=1 00:04:37.015 --rc genhtml_function_coverage=1 00:04:37.015 --rc genhtml_legend=1 00:04:37.015 --rc geninfo_all_blocks=1 00:04:37.015 --rc geninfo_unexecuted_blocks=1 00:04:37.015 00:04:37.015 ' 00:04:37.015 16:47:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:37.015 OK 00:04:37.015 16:47:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:37.015 00:04:37.015 real 0m0.227s 00:04:37.015 user 0m0.138s 00:04:37.015 sys 0m0.102s 00:04:37.015 16:47:29 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.015 16:47:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:37.015 ************************************ 00:04:37.015 END TEST rpc_client 00:04:37.015 ************************************ 00:04:37.015 16:47:29 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
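[Editor's sketch] The lt 1.15 2 probe above decides which lcov flags to use by comparing dotted versions field by field; a condensed sketch of that cmp_versions idea (simplified: scripts/common.sh also splits on '-' and ':' separators and implements the '>' direction):

    # Succeed when dotted version $1 sorts strictly before $2.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov predates 2.x: pass the legacy --rc flags"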
00:04:37.015 16:47:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.015 16:47:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.015 16:47:29 -- common/autotest_common.sh@10 -- # set +x 00:04:37.277 ************************************ 00:04:37.277 START TEST json_config 00:04:37.277 ************************************ 00:04:37.277 16:47:29 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:37.277 16:47:29 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.277 16:47:29 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.277 16:47:29 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.277 16:47:29 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.277 16:47:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.277 16:47:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.277 16:47:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.277 16:47:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.277 16:47:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.277 16:47:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.277 16:47:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.277 16:47:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.277 16:47:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.277 16:47:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.277 16:47:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.277 16:47:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:37.277 16:47:29 json_config -- scripts/common.sh@345 -- # : 1 00:04:37.277 16:47:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.277 16:47:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.277 16:47:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:37.277 16:47:29 json_config -- scripts/common.sh@353 -- # local d=1 00:04:37.277 16:47:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.277 16:47:29 json_config -- scripts/common.sh@355 -- # echo 1 00:04:37.277 16:47:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.277 16:47:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:37.277 16:47:29 json_config -- scripts/common.sh@353 -- # local d=2 00:04:37.277 16:47:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.277 16:47:29 json_config -- scripts/common.sh@355 -- # echo 2 00:04:37.277 16:47:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.277 16:47:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.277 16:47:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.277 16:47:29 json_config -- scripts/common.sh@368 -- # return 0 00:04:37.277 16:47:29 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.277 16:47:29 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.277 --rc genhtml_branch_coverage=1 00:04:37.277 --rc genhtml_function_coverage=1 00:04:37.277 --rc genhtml_legend=1 00:04:37.277 --rc geninfo_all_blocks=1 00:04:37.277 --rc geninfo_unexecuted_blocks=1 00:04:37.277 00:04:37.277 ' 00:04:37.277 16:47:29 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.277 --rc genhtml_branch_coverage=1 00:04:37.277 --rc genhtml_function_coverage=1 00:04:37.277 --rc genhtml_legend=1 00:04:37.277 --rc geninfo_all_blocks=1 00:04:37.277 --rc geninfo_unexecuted_blocks=1 00:04:37.277 00:04:37.277 ' 00:04:37.277 16:47:29 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:37.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.277 --rc genhtml_branch_coverage=1 00:04:37.277 --rc genhtml_function_coverage=1 00:04:37.277 --rc genhtml_legend=1 00:04:37.277 --rc geninfo_all_blocks=1 00:04:37.277 --rc geninfo_unexecuted_blocks=1 00:04:37.277 00:04:37.277 ' 00:04:37.277 16:47:29 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.277 --rc genhtml_branch_coverage=1 00:04:37.277 --rc genhtml_function_coverage=1 00:04:37.277 --rc genhtml_legend=1 00:04:37.277 --rc geninfo_all_blocks=1 00:04:37.277 --rc geninfo_unexecuted_blocks=1 00:04:37.277 00:04:37.277 ' 00:04:37.277 16:47:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
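[Editor's sketch] A few assignments later the same nvmf/common.sh derives the host identity reused by every nvme connect in the suite; sketched below, where the parameter-expansion step is an assumption that happens to reproduce the values captured in this run:

    # One host NQN per run; the host ID is its trailing UUID.
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # 00d0226a-fbea-ec11-9bc7-a4bf019282be here
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")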
00:04:37.277 16:47:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:37.277 16:47:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:37.277 16:47:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:37.277 16:47:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:37.277 16:47:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:37.277 16:47:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.277 16:47:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.277 16:47:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.277 16:47:29 json_config -- paths/export.sh@5 -- # export PATH 00:04:37.277 16:47:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@51 -- # : 0 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:37.277 16:47:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:37.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:37.277 16:47:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:37.277 16:47:29 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:37.277 16:47:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:37.277 16:47:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:37.277 16:47:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:37.277 16:47:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:37.277 16:47:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:37.277 16:47:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:37.277 16:47:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:37.277 16:47:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:37.277 16:47:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:37.277 16:47:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:37.277 16:47:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:37.277 16:47:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:37.277 16:47:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:37.278 16:47:29 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:37.278 16:47:29 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:37.278 INFO: JSON configuration test init 00:04:37.278 16:47:29 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:37.278 16:47:29 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:37.278 16:47:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.278 16:47:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.278 16:47:29 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:37.278 16:47:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.278 16:47:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.278 16:47:29 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:37.278 16:47:29 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:37.278 16:47:29 json_config -- json_config/common.sh@10 -- # shift 00:04:37.278 16:47:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:37.278 16:47:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:37.278 16:47:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:37.278 16:47:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.278 16:47:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.278 16:47:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1724946 00:04:37.278 16:47:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:37.278 Waiting for target to run... 00:04:37.278 16:47:29 json_config -- json_config/common.sh@25 -- # waitforlisten 1724946 /var/tmp/spdk_tgt.sock 00:04:37.278 16:47:29 json_config -- common/autotest_common.sh@835 -- # '[' -z 1724946 ']' 00:04:37.278 16:47:29 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:37.278 16:47:29 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.278 16:47:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:37.278 16:47:29 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:37.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:37.278 16:47:29 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.278 16:47:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.538 [2024-11-20 16:47:29.485834] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
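[Editor's sketch] Note the socket choice in the command line above: this target gets its own -r /var/tmp/spdk_tgt.sock, which is precisely the remedy for the "socket in use" collision that exit_on_failed_rpc_init provoked earlier. Every json_config RPC is then funneled through one wrapper bound to that socket, which the common.sh@57 expansions below show to be essentially:

    # All target RPCs in this suite share the spdk_tgt.sock address.
    tgt_rpc() {
        scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"
    }

    # As exercised below: malloc bdevs, then the TCP transport.
    tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0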
00:04:37.538 [2024-11-20 16:47:29.485897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1724946 ] 00:04:37.797 [2024-11-20 16:47:29.718959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.797 [2024-11-20 16:47:29.742234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.368 16:47:30 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.368 16:47:30 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:38.368 16:47:30 json_config -- json_config/common.sh@26 -- # echo '' 00:04:38.368 00:04:38.368 16:47:30 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:38.368 16:47:30 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:38.368 16:47:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:38.368 16:47:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.368 16:47:30 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:38.368 16:47:30 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:38.368 16:47:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:38.368 16:47:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.368 16:47:30 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:38.368 16:47:30 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:38.368 16:47:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:38.938 16:47:30 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:38.938 16:47:30 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:38.938 16:47:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:38.938 16:47:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.938 16:47:30 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:38.938 16:47:30 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:38.938 16:47:30 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:38.938 16:47:30 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:38.938 16:47:30 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:38.938 16:47:30 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:38.938 16:47:30 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:38.938 16:47:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:38.938 16:47:31 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:38.938 16:47:31 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:38.938 16:47:31 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:38.938 16:47:31 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:38.938 16:47:31 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:38.938 16:47:31 json_config -- json_config/json_config.sh@54 -- # sort 00:04:38.938 16:47:31 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:38.939 16:47:31 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:38.939 16:47:31 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:38.939 16:47:31 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:38.939 16:47:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:38.939 16:47:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.939 16:47:31 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:38.939 16:47:31 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:38.939 16:47:31 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:38.939 16:47:31 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:38.939 16:47:31 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:38.939 16:47:31 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:38.939 16:47:31 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:38.939 16:47:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:38.939 16:47:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.199 16:47:31 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:39.199 16:47:31 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:39.199 16:47:31 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:39.199 16:47:31 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:39.199 16:47:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:39.199 MallocForNvmf0 00:04:39.199 16:47:31 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:39.199 16:47:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:39.458 MallocForNvmf1 00:04:39.458 16:47:31 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:39.458 16:47:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:39.718 [2024-11-20 16:47:31.637381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:39.718 16:47:31 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:39.718 16:47:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:39.718 16:47:31 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:39.718 16:47:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:39.979 16:47:32 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:39.979 16:47:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:40.240 16:47:32 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:40.240 16:47:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:40.240 [2024-11-20 16:47:32.367581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:40.240 16:47:32 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:40.240 16:47:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:40.240 16:47:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.501 16:47:32 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:40.501 16:47:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:40.501 16:47:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.501 16:47:32 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:40.501 16:47:32 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:40.501 16:47:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:40.501 MallocBdevForConfigChangeCheck 00:04:40.501 16:47:32 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:40.501 16:47:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:40.501 16:47:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.761 16:47:32 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:40.761 16:47:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:41.020 16:47:33 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:41.020 INFO: shutting down applications... 
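Annotation: the RPC sequence traced above can be replayed by hand against a running spdk_tgt. A minimal sketch, assuming the working directory is the SPDK repo root and the target is listening on /var/tmp/spdk_tgt.sock as in this run (flags copied from the trace):

    RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0       # 8 MB bdev, 512 B blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1      # 4 MB bdev, 1024 B blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0            # -u/-c as in the trace
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420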
00:04:41.020 16:47:33 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:41.020 16:47:33 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:41.020 16:47:33 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:41.020 16:47:33 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:41.280 Calling clear_iscsi_subsystem 00:04:41.280 Calling clear_nvmf_subsystem 00:04:41.280 Calling clear_nbd_subsystem 00:04:41.280 Calling clear_ublk_subsystem 00:04:41.280 Calling clear_vhost_blk_subsystem 00:04:41.280 Calling clear_vhost_scsi_subsystem 00:04:41.280 Calling clear_bdev_subsystem 00:04:41.280 16:47:33 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:41.280 16:47:33 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:41.280 16:47:33 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:41.280 16:47:33 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:41.280 16:47:33 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:41.540 16:47:33 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:41.800 16:47:33 json_config -- json_config/json_config.sh@352 -- # break 00:04:41.800 16:47:33 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:41.800 16:47:33 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:41.800 16:47:33 json_config -- json_config/common.sh@31 -- # local app=target 00:04:41.800 16:47:33 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:41.800 16:47:33 json_config -- json_config/common.sh@35 -- # [[ -n 1724946 ]] 00:04:41.800 16:47:33 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1724946 00:04:41.800 16:47:33 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:41.800 16:47:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.800 16:47:33 json_config -- json_config/common.sh@41 -- # kill -0 1724946 00:04:41.801 16:47:33 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.371 16:47:34 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.371 16:47:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.371 16:47:34 json_config -- json_config/common.sh@41 -- # kill -0 1724946 00:04:42.371 16:47:34 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:42.371 16:47:34 json_config -- json_config/common.sh@43 -- # break 00:04:42.371 16:47:34 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:42.371 16:47:34 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:42.371 SPDK target shutdown done 00:04:42.371 16:47:34 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:42.371 INFO: relaunching applications... 
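Annotation: json_config_clear, traced above, tears down the live config and then polls until the target reports an empty one. A sketch of that loop, assuming (as the trace implies) that config_filter.py -method check_empty succeeds only once nothing but global parameters remains:

    ./test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    count=100
    while (( count > 0 )); do
        ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
            | ./test/json_config/config_filter.py -method delete_global_parameters \
            | ./test/json_config/config_filter.py -method check_empty && break
        (( count-- ))
    done
    (( count == 0 )) && echo 'config not empty after teardown' >&2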
00:04:42.371 16:47:34 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.371 16:47:34 json_config -- json_config/common.sh@9 -- # local app=target 00:04:42.371 16:47:34 json_config -- json_config/common.sh@10 -- # shift 00:04:42.371 16:47:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.371 16:47:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.371 16:47:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.371 16:47:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.371 16:47:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.371 16:47:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1726086 00:04:42.371 16:47:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.371 Waiting for target to run... 00:04:42.371 16:47:34 json_config -- json_config/common.sh@25 -- # waitforlisten 1726086 /var/tmp/spdk_tgt.sock 00:04:42.371 16:47:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.371 16:47:34 json_config -- common/autotest_common.sh@835 -- # '[' -z 1726086 ']' 00:04:42.371 16:47:34 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.372 16:47:34 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.372 16:47:34 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.372 16:47:34 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.372 16:47:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.372 [2024-11-20 16:47:34.364195] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:04:42.372 [2024-11-20 16:47:34.364252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1726086 ] 00:04:42.632 [2024-11-20 16:47:34.661888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.632 [2024-11-20 16:47:34.687648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.200 [2024-11-20 16:47:35.187928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.200 [2024-11-20 16:47:35.220326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:43.200 16:47:35 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.200 16:47:35 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:43.200 16:47:35 json_config -- json_config/common.sh@26 -- # echo '' 00:04:43.200 00:04:43.201 16:47:35 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:43.201 16:47:35 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:43.201 INFO: Checking if target configuration is the same... 
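Annotation: the relaunch above starts spdk_tgt from the JSON dump saved earlier, then waits for the RPC socket to come up. A minimal equivalent of that start-and-wait (the real waitforlisten in autotest_common.sh is more elaborate; this poll is a stand-in):

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./spdk_tgt_config.json &
    app_pid=$!
    # Poll until any RPC succeeds, i.e. the target is listening.
    until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done

The "configuration is the same" check that follows normalizes both JSON dumps with config_filter.py -method sort before running diff -u, so key order alone cannot produce a false mismatch.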
00:04:43.201 16:47:35 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:43.201 16:47:35 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:43.201 16:47:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:43.201 + '[' 2 -ne 2 ']' 00:04:43.201 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:43.201 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:43.201 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:43.201 +++ basename /dev/fd/62 00:04:43.201 ++ mktemp /tmp/62.XXX 00:04:43.201 + tmp_file_1=/tmp/62.PrZ 00:04:43.201 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:43.201 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:43.201 + tmp_file_2=/tmp/spdk_tgt_config.json.66q 00:04:43.201 + ret=0 00:04:43.201 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:43.460 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:43.721 + diff -u /tmp/62.PrZ /tmp/spdk_tgt_config.json.66q 00:04:43.721 + echo 'INFO: JSON config files are the same' 00:04:43.721 INFO: JSON config files are the same 00:04:43.721 + rm /tmp/62.PrZ /tmp/spdk_tgt_config.json.66q 00:04:43.721 + exit 0 00:04:43.721 16:47:35 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:43.721 16:47:35 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:43.721 INFO: changing configuration and checking if this can be detected... 00:04:43.721 16:47:35 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:43.721 16:47:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:43.721 16:47:35 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:43.721 16:47:35 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:43.721 16:47:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:43.721 + '[' 2 -ne 2 ']' 00:04:43.721 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:43.721 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:43.721 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:43.721 +++ basename /dev/fd/62 00:04:43.721 ++ mktemp /tmp/62.XXX 00:04:43.721 + tmp_file_1=/tmp/62.LOi 00:04:43.721 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:43.721 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:43.721 + tmp_file_2=/tmp/spdk_tgt_config.json.vOA 00:04:43.721 + ret=0 00:04:43.721 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:44.291 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:44.291 + diff -u /tmp/62.LOi /tmp/spdk_tgt_config.json.vOA 00:04:44.291 + ret=1 00:04:44.291 + echo '=== Start of file: /tmp/62.LOi ===' 00:04:44.291 + cat /tmp/62.LOi 00:04:44.291 + echo '=== End of file: /tmp/62.LOi ===' 00:04:44.291 + echo '' 00:04:44.291 + echo '=== Start of file: /tmp/spdk_tgt_config.json.vOA ===' 00:04:44.291 + cat /tmp/spdk_tgt_config.json.vOA 00:04:44.291 + echo '=== End of file: /tmp/spdk_tgt_config.json.vOA ===' 00:04:44.291 + echo '' 00:04:44.291 + rm /tmp/62.LOi /tmp/spdk_tgt_config.json.vOA 00:04:44.291 + exit 1 00:04:44.291 16:47:36 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:44.291 INFO: configuration change detected. 00:04:44.291 16:47:36 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:44.291 16:47:36 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:44.291 16:47:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:44.291 16:47:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.291 16:47:36 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:44.291 16:47:36 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:44.291 16:47:36 json_config -- json_config/json_config.sh@324 -- # [[ -n 1726086 ]] 00:04:44.291 16:47:36 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:44.291 16:47:36 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:44.291 16:47:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:44.291 16:47:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.291 16:47:36 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:44.291 16:47:36 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:44.291 16:47:36 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:44.291 16:47:36 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:44.291 16:47:36 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:44.291 16:47:36 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:44.291 16:47:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:44.291 16:47:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.291 16:47:36 json_config -- json_config/json_config.sh@330 -- # killprocess 1726086 00:04:44.291 16:47:36 json_config -- common/autotest_common.sh@954 -- # '[' -z 1726086 ']' 00:04:44.291 16:47:36 json_config -- common/autotest_common.sh@958 -- # kill -0 1726086 00:04:44.291 16:47:36 json_config -- common/autotest_common.sh@959 -- # uname 00:04:44.291 16:47:36 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.291 16:47:36 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1726086 00:04:44.291 16:47:36 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.291 16:47:36 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.291 16:47:36 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1726086' 00:04:44.291 killing process with pid 1726086 00:04:44.291 16:47:36 json_config -- common/autotest_common.sh@973 -- # kill 1726086 00:04:44.291 16:47:36 json_config -- common/autotest_common.sh@978 -- # wait 1726086 00:04:44.552 16:47:36 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.552 16:47:36 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:44.552 16:47:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:44.552 16:47:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.552 16:47:36 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:44.552 16:47:36 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:44.552 INFO: Success 00:04:44.552 00:04:44.552 real 0m7.449s 00:04:44.552 user 0m9.101s 00:04:44.552 sys 0m1.909s 00:04:44.552 16:47:36 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.552 16:47:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.552 ************************************ 00:04:44.552 END TEST json_config 00:04:44.552 ************************************ 00:04:44.552 16:47:36 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:44.552 16:47:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.552 16:47:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.552 16:47:36 -- common/autotest_common.sh@10 -- # set +x 00:04:44.814 ************************************ 00:04:44.814 START TEST json_config_extra_key 00:04:44.814 ************************************ 00:04:44.814 16:47:36 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:44.814 16:47:36 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.814 16:47:36 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.814 16:47:36 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.814 16:47:36 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.814 16:47:36 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:44.814 16:47:36 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.814 16:47:36 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.814 --rc genhtml_branch_coverage=1 00:04:44.814 --rc genhtml_function_coverage=1 00:04:44.814 --rc genhtml_legend=1 00:04:44.814 --rc geninfo_all_blocks=1 00:04:44.814 --rc geninfo_unexecuted_blocks=1 00:04:44.814 00:04:44.814 ' 00:04:44.814 16:47:36 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.814 --rc genhtml_branch_coverage=1 00:04:44.814 --rc genhtml_function_coverage=1 00:04:44.814 --rc genhtml_legend=1 00:04:44.814 --rc geninfo_all_blocks=1 00:04:44.814 --rc geninfo_unexecuted_blocks=1 00:04:44.814 00:04:44.814 ' 00:04:44.814 16:47:36 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.814 --rc genhtml_branch_coverage=1 00:04:44.814 --rc genhtml_function_coverage=1 00:04:44.814 --rc genhtml_legend=1 00:04:44.814 --rc geninfo_all_blocks=1 00:04:44.814 --rc geninfo_unexecuted_blocks=1 00:04:44.814 00:04:44.814 ' 00:04:44.814 16:47:36 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.814 --rc genhtml_branch_coverage=1 00:04:44.814 --rc genhtml_function_coverage=1 00:04:44.814 --rc genhtml_legend=1 00:04:44.814 --rc geninfo_all_blocks=1 00:04:44.814 --rc geninfo_unexecuted_blocks=1 00:04:44.814 00:04:44.814 ' 00:04:44.814 16:47:36 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:44.814 16:47:36 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:44.814 16:47:36 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.814 16:47:36 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.814 16:47:36 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.814 16:47:36 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.814 16:47:36 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.814 16:47:36 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.814 16:47:36 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.814 16:47:36 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.814 16:47:36 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.814 16:47:36 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.814 16:47:36 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:44.814 16:47:36 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:44.814 16:47:36 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.814 16:47:36 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.814 16:47:36 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.814 16:47:36 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.814 16:47:36 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.814 16:47:36 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.814 16:47:36 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.814 16:47:36 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.814 16:47:36 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.814 16:47:36 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:44.814 16:47:36 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.814 16:47:36 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:44.815 16:47:36 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:44.815 16:47:36 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:44.815 16:47:36 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:44.815 16:47:36 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.815 16:47:36 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.815 16:47:36 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:44.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:44.815 16:47:36 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:44.815 16:47:36 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:44.815 16:47:36 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:44.815 16:47:36 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:44.815 16:47:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:44.815 16:47:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:44.815 16:47:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:44.815 16:47:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:44.815 16:47:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:44.815 16:47:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:44.815 16:47:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:44.815 16:47:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:44.815 16:47:36 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:44.815 16:47:36 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:44.815 INFO: launching applications... 
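Annotation: the stderr line captured earlier in this suite's trace ("[: : integer expression expected", nvmf/common.sh line 33) comes from an integer test hitting a variable that is empty in this environment; the harness tolerates it and carries on. A guarded form of the same test, with a hypothetical variable name standing in for the one common.sh uses:

    shm_id=""                           # empty in this run, hence the warning
    if [ "${shm_id:-0}" -eq 1 ]; then   # ':-0' supplies a numeric default
        echo "shared-memory mode enabled"
    fi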
00:04:44.815 16:47:36 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:44.815 16:47:36 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:44.815 16:47:36 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:44.815 16:47:36 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:44.815 16:47:36 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:44.815 16:47:36 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:44.815 16:47:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.815 16:47:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.815 16:47:36 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1726860 00:04:44.815 16:47:36 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:44.815 Waiting for target to run... 00:04:44.815 16:47:36 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1726860 /var/tmp/spdk_tgt.sock 00:04:44.815 16:47:36 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1726860 ']' 00:04:44.815 16:47:36 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:44.815 16:47:36 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:44.815 16:47:36 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.815 16:47:36 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:44.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:44.815 16:47:36 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.815 16:47:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:45.075 [2024-11-20 16:47:36.999723] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:04:45.075 [2024-11-20 16:47:36.999793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1726860 ] 00:04:45.336 [2024-11-20 16:47:37.351201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.336 [2024-11-20 16:47:37.376512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.908 16:47:37 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.908 16:47:37 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:45.908 16:47:37 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:45.908 00:04:45.908 16:47:37 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:45.908 INFO: shutting down applications... 
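Annotation: json_config_test_shutdown_app, traced below (and earlier for the json_config suite), stops the target with SIGINT and then polls for exit rather than killing outright. A sketch of the loop as the trace shows it:

    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break   # kill -0 = existence check only
        sleep 0.5
    done
    echo 'SPDK target shutdown done'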
00:04:45.908 16:47:37 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:45.908 16:47:37 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:45.908 16:47:37 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:45.908 16:47:37 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1726860 ]] 00:04:45.908 16:47:37 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1726860 00:04:45.908 16:47:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:45.908 16:47:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.908 16:47:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1726860 00:04:45.908 16:47:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.170 16:47:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.170 16:47:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.170 16:47:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1726860 00:04:46.170 16:47:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:46.170 16:47:38 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:46.170 16:47:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:46.170 16:47:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:46.170 SPDK target shutdown done 00:04:46.170 16:47:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:46.170 Success 00:04:46.170 00:04:46.170 real 0m1.585s 00:04:46.170 user 0m1.136s 00:04:46.170 sys 0m0.494s 00:04:46.170 16:47:38 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.170 16:47:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:46.170 ************************************ 00:04:46.170 END TEST json_config_extra_key 00:04:46.170 ************************************ 00:04:46.432 16:47:38 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:46.432 16:47:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.432 16:47:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.432 16:47:38 -- common/autotest_common.sh@10 -- # set +x 00:04:46.432 ************************************ 00:04:46.432 START TEST alias_rpc 00:04:46.432 ************************************ 00:04:46.432 16:47:38 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:46.432 * Looking for test storage... 
00:04:46.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:46.432 16:47:38 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:46.432 16:47:38 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:46.432 16:47:38 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:46.432 16:47:38 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.432 16:47:38 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:46.432 16:47:38 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.432 16:47:38 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:46.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.432 --rc genhtml_branch_coverage=1 00:04:46.432 --rc genhtml_function_coverage=1 00:04:46.432 --rc genhtml_legend=1 00:04:46.432 --rc geninfo_all_blocks=1 00:04:46.432 --rc geninfo_unexecuted_blocks=1 00:04:46.432 00:04:46.432 ' 00:04:46.432 16:47:38 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:46.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.432 --rc genhtml_branch_coverage=1 00:04:46.432 --rc genhtml_function_coverage=1 00:04:46.432 --rc genhtml_legend=1 00:04:46.432 --rc geninfo_all_blocks=1 00:04:46.432 --rc geninfo_unexecuted_blocks=1 00:04:46.432 00:04:46.432 ' 00:04:46.432 16:47:38 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:46.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.432 --rc genhtml_branch_coverage=1 00:04:46.432 --rc genhtml_function_coverage=1 00:04:46.432 --rc genhtml_legend=1 00:04:46.432 --rc geninfo_all_blocks=1 00:04:46.432 --rc geninfo_unexecuted_blocks=1 00:04:46.432 00:04:46.432 ' 00:04:46.432 16:47:38 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:46.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.432 --rc genhtml_branch_coverage=1 00:04:46.432 --rc genhtml_function_coverage=1 00:04:46.432 --rc genhtml_legend=1 00:04:46.432 --rc geninfo_all_blocks=1 00:04:46.432 --rc geninfo_unexecuted_blocks=1 00:04:46.432 00:04:46.432 ' 00:04:46.432 16:47:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:46.432 16:47:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1727231 00:04:46.433 16:47:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1727231 00:04:46.433 16:47:38 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1727231 ']' 00:04:46.433 16:47:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.433 16:47:38 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.433 16:47:38 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.433 16:47:38 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.433 16:47:38 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.433 16:47:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.717 [2024-11-20 16:47:38.665431] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
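Annotation: killprocess, traced above, sanity-checks the PID before signalling it: the process must still exist and must not be sudo itself. A simplified sketch (the harness also branches on uname, and its handling of the sudo case is omitted here):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                  # still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")     # reactor_0 for an SPDK app
        [ "$name" = sudo ] && return 1              # never signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap our child, get its status
    }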
00:04:46.717 [2024-11-20 16:47:38.665509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1727231 ] 00:04:46.717 [2024-11-20 16:47:38.753242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.717 [2024-11-20 16:47:38.788069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.333 16:47:39 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.333 16:47:39 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:47.333 16:47:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:47.595 16:47:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1727231 00:04:47.595 16:47:39 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1727231 ']' 00:04:47.595 16:47:39 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1727231 00:04:47.595 16:47:39 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:47.595 16:47:39 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.595 16:47:39 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1727231 00:04:47.595 16:47:39 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.595 16:47:39 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.595 16:47:39 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1727231' 00:04:47.595 killing process with pid 1727231 00:04:47.595 16:47:39 alias_rpc -- common/autotest_common.sh@973 -- # kill 1727231 00:04:47.595 16:47:39 alias_rpc -- common/autotest_common.sh@978 -- # wait 1727231 00:04:47.856 00:04:47.856 real 0m1.496s 00:04:47.856 user 0m1.617s 00:04:47.856 sys 0m0.437s 00:04:47.856 16:47:39 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.856 16:47:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.856 ************************************ 00:04:47.856 END TEST alias_rpc 00:04:47.856 ************************************ 00:04:47.856 16:47:39 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:47.856 16:47:39 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:47.856 16:47:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.856 16:47:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.856 16:47:39 -- common/autotest_common.sh@10 -- # set +x 00:04:47.856 ************************************ 00:04:47.856 START TEST spdkcli_tcp 00:04:47.856 ************************************ 00:04:47.856 16:47:39 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:48.117 * Looking for test storage... 
00:04:48.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:48.117 16:47:40 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:48.117 16:47:40 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:48.117 16:47:40 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:48.117 16:47:40 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.117 16:47:40 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:48.117 16:47:40 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.117 16:47:40 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:48.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.117 --rc genhtml_branch_coverage=1 00:04:48.117 --rc genhtml_function_coverage=1 00:04:48.117 --rc genhtml_legend=1 00:04:48.117 --rc geninfo_all_blocks=1 00:04:48.117 --rc geninfo_unexecuted_blocks=1 00:04:48.117 00:04:48.117 ' 00:04:48.117 16:47:40 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:48.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.117 --rc genhtml_branch_coverage=1 00:04:48.117 --rc genhtml_function_coverage=1 00:04:48.117 --rc genhtml_legend=1 00:04:48.117 --rc geninfo_all_blocks=1 00:04:48.117 --rc 
geninfo_unexecuted_blocks=1 00:04:48.117 00:04:48.117 ' 00:04:48.117 16:47:40 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:48.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.117 --rc genhtml_branch_coverage=1 00:04:48.117 --rc genhtml_function_coverage=1 00:04:48.117 --rc genhtml_legend=1 00:04:48.117 --rc geninfo_all_blocks=1 00:04:48.117 --rc geninfo_unexecuted_blocks=1 00:04:48.117 00:04:48.117 ' 00:04:48.117 16:47:40 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:48.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.117 --rc genhtml_branch_coverage=1 00:04:48.117 --rc genhtml_function_coverage=1 00:04:48.117 --rc genhtml_legend=1 00:04:48.117 --rc geninfo_all_blocks=1 00:04:48.117 --rc geninfo_unexecuted_blocks=1 00:04:48.117 00:04:48.117 ' 00:04:48.117 16:47:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:48.117 16:47:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:48.117 16:47:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:48.117 16:47:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:48.117 16:47:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:48.117 16:47:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:48.117 16:47:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:48.117 16:47:40 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:48.117 16:47:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.117 16:47:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1727569 00:04:48.117 16:47:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1727569 00:04:48.118 16:47:40 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1727569 ']' 00:04:48.118 16:47:40 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.118 16:47:40 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.118 16:47:40 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.118 16:47:40 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.118 16:47:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.118 16:47:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:48.118 [2024-11-20 16:47:40.238804] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
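Annotation: the spdkcli_tcp test below bridges the target's UNIX-domain RPC socket onto TCP port 9998 with socat and then drives it over the network. A sketch of that setup, assuming the target's default socket /var/tmp/spdk.sock as in this run:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # -r = connection retries, -t = timeout, matching the flags in the trace
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"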
00:04:48.118 [2024-11-20 16:47:40.238879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1727569 ] 00:04:48.378 [2024-11-20 16:47:40.329189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.378 [2024-11-20 16:47:40.371202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.378 [2024-11-20 16:47:40.371203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.950 16:47:41 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.950 16:47:41 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:48.950 16:47:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1727689 00:04:48.950 16:47:41 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:48.950 16:47:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:49.212 [ 00:04:49.212 "bdev_malloc_delete", 00:04:49.212 "bdev_malloc_create", 00:04:49.212 "bdev_null_resize", 00:04:49.212 "bdev_null_delete", 00:04:49.212 "bdev_null_create", 00:04:49.212 "bdev_nvme_cuse_unregister", 00:04:49.212 "bdev_nvme_cuse_register", 00:04:49.212 "bdev_opal_new_user", 00:04:49.212 "bdev_opal_set_lock_state", 00:04:49.212 "bdev_opal_delete", 00:04:49.212 "bdev_opal_get_info", 00:04:49.212 "bdev_opal_create", 00:04:49.212 "bdev_nvme_opal_revert", 00:04:49.212 "bdev_nvme_opal_init", 00:04:49.212 "bdev_nvme_send_cmd", 00:04:49.212 "bdev_nvme_set_keys", 00:04:49.212 "bdev_nvme_get_path_iostat", 00:04:49.212 "bdev_nvme_get_mdns_discovery_info", 00:04:49.212 "bdev_nvme_stop_mdns_discovery", 00:04:49.212 "bdev_nvme_start_mdns_discovery", 00:04:49.212 "bdev_nvme_set_multipath_policy", 00:04:49.212 "bdev_nvme_set_preferred_path", 00:04:49.212 "bdev_nvme_get_io_paths", 00:04:49.212 "bdev_nvme_remove_error_injection", 00:04:49.212 "bdev_nvme_add_error_injection", 00:04:49.212 "bdev_nvme_get_discovery_info", 00:04:49.212 "bdev_nvme_stop_discovery", 00:04:49.212 "bdev_nvme_start_discovery", 00:04:49.212 "bdev_nvme_get_controller_health_info", 00:04:49.212 "bdev_nvme_disable_controller", 00:04:49.212 "bdev_nvme_enable_controller", 00:04:49.212 "bdev_nvme_reset_controller", 00:04:49.212 "bdev_nvme_get_transport_statistics", 00:04:49.212 "bdev_nvme_apply_firmware", 00:04:49.212 "bdev_nvme_detach_controller", 00:04:49.212 "bdev_nvme_get_controllers", 00:04:49.212 "bdev_nvme_attach_controller", 00:04:49.212 "bdev_nvme_set_hotplug", 00:04:49.212 "bdev_nvme_set_options", 00:04:49.212 "bdev_passthru_delete", 00:04:49.212 "bdev_passthru_create", 00:04:49.212 "bdev_lvol_set_parent_bdev", 00:04:49.212 "bdev_lvol_set_parent", 00:04:49.212 "bdev_lvol_check_shallow_copy", 00:04:49.212 "bdev_lvol_start_shallow_copy", 00:04:49.212 "bdev_lvol_grow_lvstore", 00:04:49.212 "bdev_lvol_get_lvols", 00:04:49.212 "bdev_lvol_get_lvstores", 00:04:49.212 "bdev_lvol_delete", 00:04:49.212 "bdev_lvol_set_read_only", 00:04:49.212 "bdev_lvol_resize", 00:04:49.212 "bdev_lvol_decouple_parent", 00:04:49.212 "bdev_lvol_inflate", 00:04:49.212 "bdev_lvol_rename", 00:04:49.212 "bdev_lvol_clone_bdev", 00:04:49.213 "bdev_lvol_clone", 00:04:49.213 "bdev_lvol_snapshot", 00:04:49.213 "bdev_lvol_create", 00:04:49.213 "bdev_lvol_delete_lvstore", 00:04:49.213 "bdev_lvol_rename_lvstore", 
00:04:49.213 "bdev_lvol_create_lvstore", 00:04:49.213 "bdev_raid_set_options", 00:04:49.213 "bdev_raid_remove_base_bdev", 00:04:49.213 "bdev_raid_add_base_bdev", 00:04:49.213 "bdev_raid_delete", 00:04:49.213 "bdev_raid_create", 00:04:49.213 "bdev_raid_get_bdevs", 00:04:49.213 "bdev_error_inject_error", 00:04:49.213 "bdev_error_delete", 00:04:49.213 "bdev_error_create", 00:04:49.213 "bdev_split_delete", 00:04:49.213 "bdev_split_create", 00:04:49.213 "bdev_delay_delete", 00:04:49.213 "bdev_delay_create", 00:04:49.213 "bdev_delay_update_latency", 00:04:49.213 "bdev_zone_block_delete", 00:04:49.213 "bdev_zone_block_create", 00:04:49.213 "blobfs_create", 00:04:49.213 "blobfs_detect", 00:04:49.213 "blobfs_set_cache_size", 00:04:49.213 "bdev_aio_delete", 00:04:49.213 "bdev_aio_rescan", 00:04:49.213 "bdev_aio_create", 00:04:49.213 "bdev_ftl_set_property", 00:04:49.213 "bdev_ftl_get_properties", 00:04:49.213 "bdev_ftl_get_stats", 00:04:49.213 "bdev_ftl_unmap", 00:04:49.213 "bdev_ftl_unload", 00:04:49.213 "bdev_ftl_delete", 00:04:49.213 "bdev_ftl_load", 00:04:49.213 "bdev_ftl_create", 00:04:49.213 "bdev_virtio_attach_controller", 00:04:49.213 "bdev_virtio_scsi_get_devices", 00:04:49.213 "bdev_virtio_detach_controller", 00:04:49.213 "bdev_virtio_blk_set_hotplug", 00:04:49.213 "bdev_iscsi_delete", 00:04:49.213 "bdev_iscsi_create", 00:04:49.213 "bdev_iscsi_set_options", 00:04:49.213 "accel_error_inject_error", 00:04:49.213 "ioat_scan_accel_module", 00:04:49.213 "dsa_scan_accel_module", 00:04:49.213 "iaa_scan_accel_module", 00:04:49.213 "vfu_virtio_create_fs_endpoint", 00:04:49.213 "vfu_virtio_create_scsi_endpoint", 00:04:49.213 "vfu_virtio_scsi_remove_target", 00:04:49.213 "vfu_virtio_scsi_add_target", 00:04:49.213 "vfu_virtio_create_blk_endpoint", 00:04:49.213 "vfu_virtio_delete_endpoint", 00:04:49.213 "keyring_file_remove_key", 00:04:49.213 "keyring_file_add_key", 00:04:49.213 "keyring_linux_set_options", 00:04:49.213 "fsdev_aio_delete", 00:04:49.213 "fsdev_aio_create", 00:04:49.213 "iscsi_get_histogram", 00:04:49.213 "iscsi_enable_histogram", 00:04:49.213 "iscsi_set_options", 00:04:49.213 "iscsi_get_auth_groups", 00:04:49.213 "iscsi_auth_group_remove_secret", 00:04:49.213 "iscsi_auth_group_add_secret", 00:04:49.213 "iscsi_delete_auth_group", 00:04:49.213 "iscsi_create_auth_group", 00:04:49.213 "iscsi_set_discovery_auth", 00:04:49.213 "iscsi_get_options", 00:04:49.213 "iscsi_target_node_request_logout", 00:04:49.213 "iscsi_target_node_set_redirect", 00:04:49.213 "iscsi_target_node_set_auth", 00:04:49.213 "iscsi_target_node_add_lun", 00:04:49.213 "iscsi_get_stats", 00:04:49.213 "iscsi_get_connections", 00:04:49.213 "iscsi_portal_group_set_auth", 00:04:49.213 "iscsi_start_portal_group", 00:04:49.213 "iscsi_delete_portal_group", 00:04:49.213 "iscsi_create_portal_group", 00:04:49.213 "iscsi_get_portal_groups", 00:04:49.213 "iscsi_delete_target_node", 00:04:49.213 "iscsi_target_node_remove_pg_ig_maps", 00:04:49.213 "iscsi_target_node_add_pg_ig_maps", 00:04:49.213 "iscsi_create_target_node", 00:04:49.213 "iscsi_get_target_nodes", 00:04:49.213 "iscsi_delete_initiator_group", 00:04:49.213 "iscsi_initiator_group_remove_initiators", 00:04:49.213 "iscsi_initiator_group_add_initiators", 00:04:49.213 "iscsi_create_initiator_group", 00:04:49.213 "iscsi_get_initiator_groups", 00:04:49.213 "nvmf_set_crdt", 00:04:49.213 "nvmf_set_config", 00:04:49.213 "nvmf_set_max_subsystems", 00:04:49.213 "nvmf_stop_mdns_prr", 00:04:49.213 "nvmf_publish_mdns_prr", 00:04:49.213 "nvmf_subsystem_get_listeners", 00:04:49.213 
"nvmf_subsystem_get_qpairs", 00:04:49.213 "nvmf_subsystem_get_controllers", 00:04:49.213 "nvmf_get_stats", 00:04:49.213 "nvmf_get_transports", 00:04:49.213 "nvmf_create_transport", 00:04:49.213 "nvmf_get_targets", 00:04:49.213 "nvmf_delete_target", 00:04:49.213 "nvmf_create_target", 00:04:49.213 "nvmf_subsystem_allow_any_host", 00:04:49.213 "nvmf_subsystem_set_keys", 00:04:49.213 "nvmf_subsystem_remove_host", 00:04:49.213 "nvmf_subsystem_add_host", 00:04:49.213 "nvmf_ns_remove_host", 00:04:49.213 "nvmf_ns_add_host", 00:04:49.213 "nvmf_subsystem_remove_ns", 00:04:49.213 "nvmf_subsystem_set_ns_ana_group", 00:04:49.213 "nvmf_subsystem_add_ns", 00:04:49.213 "nvmf_subsystem_listener_set_ana_state", 00:04:49.213 "nvmf_discovery_get_referrals", 00:04:49.213 "nvmf_discovery_remove_referral", 00:04:49.213 "nvmf_discovery_add_referral", 00:04:49.213 "nvmf_subsystem_remove_listener", 00:04:49.213 "nvmf_subsystem_add_listener", 00:04:49.213 "nvmf_delete_subsystem", 00:04:49.213 "nvmf_create_subsystem", 00:04:49.213 "nvmf_get_subsystems", 00:04:49.213 "env_dpdk_get_mem_stats", 00:04:49.213 "nbd_get_disks", 00:04:49.213 "nbd_stop_disk", 00:04:49.213 "nbd_start_disk", 00:04:49.213 "ublk_recover_disk", 00:04:49.213 "ublk_get_disks", 00:04:49.213 "ublk_stop_disk", 00:04:49.213 "ublk_start_disk", 00:04:49.213 "ublk_destroy_target", 00:04:49.213 "ublk_create_target", 00:04:49.213 "virtio_blk_create_transport", 00:04:49.213 "virtio_blk_get_transports", 00:04:49.213 "vhost_controller_set_coalescing", 00:04:49.213 "vhost_get_controllers", 00:04:49.213 "vhost_delete_controller", 00:04:49.213 "vhost_create_blk_controller", 00:04:49.213 "vhost_scsi_controller_remove_target", 00:04:49.213 "vhost_scsi_controller_add_target", 00:04:49.213 "vhost_start_scsi_controller", 00:04:49.213 "vhost_create_scsi_controller", 00:04:49.213 "thread_set_cpumask", 00:04:49.213 "scheduler_set_options", 00:04:49.213 "framework_get_governor", 00:04:49.213 "framework_get_scheduler", 00:04:49.213 "framework_set_scheduler", 00:04:49.213 "framework_get_reactors", 00:04:49.213 "thread_get_io_channels", 00:04:49.213 "thread_get_pollers", 00:04:49.213 "thread_get_stats", 00:04:49.213 "framework_monitor_context_switch", 00:04:49.213 "spdk_kill_instance", 00:04:49.213 "log_enable_timestamps", 00:04:49.213 "log_get_flags", 00:04:49.213 "log_clear_flag", 00:04:49.213 "log_set_flag", 00:04:49.213 "log_get_level", 00:04:49.213 "log_set_level", 00:04:49.213 "log_get_print_level", 00:04:49.213 "log_set_print_level", 00:04:49.213 "framework_enable_cpumask_locks", 00:04:49.213 "framework_disable_cpumask_locks", 00:04:49.213 "framework_wait_init", 00:04:49.213 "framework_start_init", 00:04:49.213 "scsi_get_devices", 00:04:49.213 "bdev_get_histogram", 00:04:49.213 "bdev_enable_histogram", 00:04:49.213 "bdev_set_qos_limit", 00:04:49.213 "bdev_set_qd_sampling_period", 00:04:49.213 "bdev_get_bdevs", 00:04:49.213 "bdev_reset_iostat", 00:04:49.213 "bdev_get_iostat", 00:04:49.213 "bdev_examine", 00:04:49.213 "bdev_wait_for_examine", 00:04:49.213 "bdev_set_options", 00:04:49.213 "accel_get_stats", 00:04:49.213 "accel_set_options", 00:04:49.213 "accel_set_driver", 00:04:49.213 "accel_crypto_key_destroy", 00:04:49.213 "accel_crypto_keys_get", 00:04:49.213 "accel_crypto_key_create", 00:04:49.213 "accel_assign_opc", 00:04:49.213 "accel_get_module_info", 00:04:49.213 "accel_get_opc_assignments", 00:04:49.213 "vmd_rescan", 00:04:49.213 "vmd_remove_device", 00:04:49.213 "vmd_enable", 00:04:49.213 "sock_get_default_impl", 00:04:49.213 "sock_set_default_impl", 
00:04:49.213 "sock_impl_set_options", 00:04:49.213 "sock_impl_get_options", 00:04:49.213 "iobuf_get_stats", 00:04:49.213 "iobuf_set_options", 00:04:49.213 "keyring_get_keys", 00:04:49.213 "vfu_tgt_set_base_path", 00:04:49.213 "framework_get_pci_devices", 00:04:49.213 "framework_get_config", 00:04:49.213 "framework_get_subsystems", 00:04:49.213 "fsdev_set_opts", 00:04:49.213 "fsdev_get_opts", 00:04:49.213 "trace_get_info", 00:04:49.213 "trace_get_tpoint_group_mask", 00:04:49.213 "trace_disable_tpoint_group", 00:04:49.213 "trace_enable_tpoint_group", 00:04:49.213 "trace_clear_tpoint_mask", 00:04:49.213 "trace_set_tpoint_mask", 00:04:49.213 "notify_get_notifications", 00:04:49.213 "notify_get_types", 00:04:49.213 "spdk_get_version", 00:04:49.213 "rpc_get_methods" 00:04:49.213 ] 00:04:49.213 16:47:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:49.213 16:47:41 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:49.213 16:47:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.213 16:47:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:49.213 16:47:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1727569 00:04:49.213 16:47:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1727569 ']' 00:04:49.213 16:47:41 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1727569 00:04:49.213 16:47:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:49.213 16:47:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.213 16:47:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1727569 00:04:49.213 16:47:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.213 16:47:41 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.213 16:47:41 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1727569' 00:04:49.213 killing process with pid 1727569 00:04:49.213 16:47:41 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1727569 00:04:49.213 16:47:41 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1727569 00:04:49.475 00:04:49.475 real 0m1.549s 00:04:49.475 user 0m2.839s 00:04:49.475 sys 0m0.464s 00:04:49.475 16:47:41 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.475 16:47:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.475 ************************************ 00:04:49.475 END TEST spdkcli_tcp 00:04:49.475 ************************************ 00:04:49.475 16:47:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.475 16:47:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.475 16:47:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.475 16:47:41 -- common/autotest_common.sh@10 -- # set +x 00:04:49.475 ************************************ 00:04:49.475 START TEST dpdk_mem_utility 00:04:49.475 ************************************ 00:04:49.475 16:47:41 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.737 * Looking for test storage... 
00:04:49.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:49.737 16:47:41 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.737 16:47:41 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.737 16:47:41 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.737 16:47:41 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.737 16:47:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:49.738 16:47:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:49.738 16:47:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.738 16:47:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:49.738 16:47:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.738 16:47:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.738 16:47:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.738 16:47:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:49.738 16:47:41 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.738 16:47:41 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.738 --rc genhtml_branch_coverage=1 00:04:49.738 --rc genhtml_function_coverage=1 00:04:49.738 --rc genhtml_legend=1 00:04:49.738 --rc geninfo_all_blocks=1 00:04:49.738 --rc geninfo_unexecuted_blocks=1 00:04:49.738 00:04:49.738 ' 00:04:49.738 16:47:41 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.738 --rc 
genhtml_branch_coverage=1 00:04:49.738 --rc genhtml_function_coverage=1 00:04:49.738 --rc genhtml_legend=1 00:04:49.738 --rc geninfo_all_blocks=1 00:04:49.738 --rc geninfo_unexecuted_blocks=1 00:04:49.738 00:04:49.738 ' 00:04:49.738 16:47:41 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.738 --rc genhtml_branch_coverage=1 00:04:49.738 --rc genhtml_function_coverage=1 00:04:49.738 --rc genhtml_legend=1 00:04:49.738 --rc geninfo_all_blocks=1 00:04:49.738 --rc geninfo_unexecuted_blocks=1 00:04:49.738 00:04:49.738 ' 00:04:49.738 16:47:41 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.738 --rc genhtml_branch_coverage=1 00:04:49.738 --rc genhtml_function_coverage=1 00:04:49.738 --rc genhtml_legend=1 00:04:49.738 --rc geninfo_all_blocks=1 00:04:49.738 --rc geninfo_unexecuted_blocks=1 00:04:49.738 00:04:49.738 ' 00:04:49.738 16:47:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:49.738 16:47:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1727937 00:04:49.738 16:47:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1727937 00:04:49.738 16:47:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1727937 ']' 00:04:49.738 16:47:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.738 16:47:41 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.738 16:47:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.738 16:47:41 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.738 16:47:41 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.738 16:47:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:49.738 [2024-11-20 16:47:41.872126] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:04:49.738 [2024-11-20 16:47:41.872220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1727937 ] 00:04:49.999 [2024-11-20 16:47:41.957877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.999 [2024-11-20 16:47:41.993310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.569 16:47:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.569 16:47:42 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:50.569 16:47:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:50.569 16:47:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:50.569 16:47:42 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.569 16:47:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:50.569 { 00:04:50.569 "filename": "/tmp/spdk_mem_dump.txt" 00:04:50.569 } 00:04:50.569 16:47:42 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.569 16:47:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:50.569 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:50.569 1 heaps totaling size 818.000000 MiB 00:04:50.569 size: 818.000000 MiB heap id: 0 00:04:50.569 end heaps---------- 00:04:50.569 9 mempools totaling size 603.782043 MiB 00:04:50.569 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:50.569 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:50.569 size: 100.555481 MiB name: bdev_io_1727937 00:04:50.569 size: 50.003479 MiB name: msgpool_1727937 00:04:50.569 size: 36.509338 MiB name: fsdev_io_1727937 00:04:50.569 size: 21.763794 MiB name: PDU_Pool 00:04:50.569 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:50.569 size: 4.133484 MiB name: evtpool_1727937 00:04:50.569 size: 0.026123 MiB name: Session_Pool 00:04:50.569 end mempools------- 00:04:50.569 6 memzones totaling size 4.142822 MiB 00:04:50.569 size: 1.000366 MiB name: RG_ring_0_1727937 00:04:50.569 size: 1.000366 MiB name: RG_ring_1_1727937 00:04:50.569 size: 1.000366 MiB name: RG_ring_4_1727937 00:04:50.569 size: 1.000366 MiB name: RG_ring_5_1727937 00:04:50.569 size: 0.125366 MiB name: RG_ring_2_1727937 00:04:50.569 size: 0.015991 MiB name: RG_ring_3_1727937 00:04:50.569 end memzones------- 00:04:50.569 16:47:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:50.830 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:50.830 list of free elements. 
size: 10.852478 MiB 00:04:50.830 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:50.830 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:50.830 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:50.830 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:50.830 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:50.830 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:50.830 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:50.830 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:50.830 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:50.830 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:50.830 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:50.830 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:50.830 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:50.830 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:50.830 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:50.830 list of standard malloc elements. size: 199.218628 MiB 00:04:50.830 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:50.830 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:50.830 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:50.830 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:50.830 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:50.830 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:50.830 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:50.830 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:50.830 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:50.830 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:50.830 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:50.830 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:50.830 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:50.830 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:50.830 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:50.830 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:50.830 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:50.830 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:50.830 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:50.830 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:50.830 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:50.830 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:50.830 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:50.830 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:50.830 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:50.830 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:50.830 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:50.830 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:50.830 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:50.830 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:50.830 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:50.830 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:50.830 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:50.830 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:50.830 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:50.830 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:50.830 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:50.830 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:50.830 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:50.830 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:50.830 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:50.830 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:50.830 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:50.830 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:50.830 list of memzone associated elements. size: 607.928894 MiB 00:04:50.830 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:50.830 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:50.830 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:50.830 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:50.830 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:50.830 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1727937_0 00:04:50.830 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:50.830 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1727937_0 00:04:50.830 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:50.830 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1727937_0 00:04:50.830 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:50.830 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:50.830 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:50.831 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:50.831 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:50.831 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1727937_0 00:04:50.831 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:50.831 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1727937 00:04:50.831 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:50.831 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1727937 00:04:50.831 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:50.831 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:50.831 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:50.831 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:50.831 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:50.831 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:50.831 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:50.831 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:50.831 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:50.831 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1727937 00:04:50.831 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:50.831 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1727937 00:04:50.831 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:50.831 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1727937 00:04:50.831 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:50.831 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1727937 00:04:50.831 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:50.831 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1727937 00:04:50.831 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:50.831 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1727937 00:04:50.831 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:50.831 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:50.831 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:50.831 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:50.831 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:50.831 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:50.831 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:50.831 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1727937 00:04:50.831 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:50.831 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1727937 00:04:50.831 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:50.831 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:50.831 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:50.831 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:50.831 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:50.831 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1727937 00:04:50.831 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:50.831 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:50.831 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:50.831 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1727937 00:04:50.831 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:50.831 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1727937 00:04:50.831 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:50.831 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1727937 00:04:50.831 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:50.831 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:50.831 16:47:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:50.831 16:47:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1727937 00:04:50.831 16:47:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1727937 ']' 00:04:50.831 16:47:42 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1727937 00:04:50.831 16:47:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:50.831 16:47:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.831 16:47:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1727937 00:04:50.831 16:47:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.831 16:47:42 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.831 16:47:42 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1727937' 00:04:50.831 killing process with pid 1727937 00:04:50.831 16:47:42 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1727937 00:04:50.831 16:47:42 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1727937 00:04:51.093 00:04:51.093 real 0m1.412s 00:04:51.093 user 0m1.472s 00:04:51.093 sys 0m0.435s 00:04:51.093 16:47:43 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.093 16:47:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.093 ************************************ 00:04:51.093 END TEST dpdk_mem_utility 00:04:51.093 ************************************ 00:04:51.093 16:47:43 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:51.093 16:47:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.093 16:47:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.093 16:47:43 -- common/autotest_common.sh@10 -- # set +x 00:04:51.093 ************************************ 00:04:51.093 START TEST event 00:04:51.093 ************************************ 00:04:51.093 16:47:43 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:51.093 * Looking for test storage... 00:04:51.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:51.093 16:47:43 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:51.093 16:47:43 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:51.093 16:47:43 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:51.093 16:47:43 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:51.093 16:47:43 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.354 16:47:43 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.354 16:47:43 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.354 16:47:43 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.354 16:47:43 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.354 16:47:43 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.354 16:47:43 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.354 16:47:43 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.354 16:47:43 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.354 16:47:43 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.354 16:47:43 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.354 16:47:43 event -- scripts/common.sh@344 -- # case "$op" in 00:04:51.354 16:47:43 event -- scripts/common.sh@345 -- # : 1 00:04:51.354 16:47:43 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.354 16:47:43 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.354 16:47:43 event -- scripts/common.sh@365 -- # decimal 1 00:04:51.354 16:47:43 event -- scripts/common.sh@353 -- # local d=1 00:04:51.354 16:47:43 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.354 16:47:43 event -- scripts/common.sh@355 -- # echo 1 00:04:51.354 16:47:43 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.354 16:47:43 event -- scripts/common.sh@366 -- # decimal 2 00:04:51.354 16:47:43 event -- scripts/common.sh@353 -- # local d=2 00:04:51.354 16:47:43 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.354 16:47:43 event -- scripts/common.sh@355 -- # echo 2 00:04:51.354 16:47:43 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.354 16:47:43 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.354 16:47:43 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.354 16:47:43 event -- scripts/common.sh@368 -- # return 0 00:04:51.354 16:47:43 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.354 16:47:43 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:51.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.354 --rc genhtml_branch_coverage=1 00:04:51.354 --rc genhtml_function_coverage=1 00:04:51.354 --rc genhtml_legend=1 00:04:51.354 --rc geninfo_all_blocks=1 00:04:51.354 --rc geninfo_unexecuted_blocks=1 00:04:51.354 00:04:51.354 ' 00:04:51.354 16:47:43 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:51.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.354 --rc genhtml_branch_coverage=1 00:04:51.354 --rc genhtml_function_coverage=1 00:04:51.354 --rc genhtml_legend=1 00:04:51.354 --rc geninfo_all_blocks=1 00:04:51.354 --rc geninfo_unexecuted_blocks=1 00:04:51.354 00:04:51.354 ' 00:04:51.354 16:47:43 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:51.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.354 --rc genhtml_branch_coverage=1 00:04:51.354 --rc genhtml_function_coverage=1 00:04:51.354 --rc genhtml_legend=1 00:04:51.354 --rc geninfo_all_blocks=1 00:04:51.354 --rc geninfo_unexecuted_blocks=1 00:04:51.354 00:04:51.354 ' 00:04:51.354 16:47:43 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:51.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.354 --rc genhtml_branch_coverage=1 00:04:51.354 --rc genhtml_function_coverage=1 00:04:51.354 --rc genhtml_legend=1 00:04:51.354 --rc geninfo_all_blocks=1 00:04:51.354 --rc geninfo_unexecuted_blocks=1 00:04:51.354 00:04:51.354 ' 00:04:51.354 16:47:43 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:51.354 16:47:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:51.354 16:47:43 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:51.354 16:47:43 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:51.354 16:47:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.354 16:47:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.354 ************************************ 00:04:51.354 START TEST event_perf 00:04:51.354 ************************************ 00:04:51.354 16:47:43 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:51.354 Running I/O for 1 seconds...[2024-11-20 16:47:43.348963] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:04:51.354 [2024-11-20 16:47:43.349075] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1728257 ] 00:04:51.354 [2024-11-20 16:47:43.440951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:51.354 [2024-11-20 16:47:43.486232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.354 [2024-11-20 16:47:43.486430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:51.354 [2024-11-20 16:47:43.486801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.354 Running I/O for 1 seconds...[2024-11-20 16:47:43.486801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:52.736 00:04:52.736 lcore 0: 180661 00:04:52.736 lcore 1: 180664 00:04:52.736 lcore 2: 180662 00:04:52.736 lcore 3: 180664 00:04:52.736 done. 00:04:52.736 00:04:52.736 real 0m1.188s 00:04:52.736 user 0m4.099s 00:04:52.736 sys 0m0.085s 00:04:52.736 16:47:44 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.736 16:47:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:52.736 ************************************ 00:04:52.736 END TEST event_perf 00:04:52.736 ************************************ 00:04:52.736 16:47:44 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:52.736 16:47:44 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:52.736 16:47:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.736 16:47:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.736 ************************************ 00:04:52.736 START TEST event_reactor 00:04:52.736 ************************************ 00:04:52.736 16:47:44 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:52.736 [2024-11-20 16:47:44.611898] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:04:52.736 [2024-11-20 16:47:44.611990] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1728524 ] 00:04:52.736 [2024-11-20 16:47:44.700390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.737 [2024-11-20 16:47:44.730751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.678 test_start 00:04:53.678 oneshot 00:04:53.678 tick 100 00:04:53.678 tick 100 00:04:53.678 tick 250 00:04:53.678 tick 100 00:04:53.678 tick 100 00:04:53.678 tick 100 00:04:53.678 tick 250 00:04:53.678 tick 500 00:04:53.678 tick 100 00:04:53.678 tick 100 00:04:53.678 tick 250 00:04:53.678 tick 100 00:04:53.678 tick 100 00:04:53.678 test_end 00:04:53.678 00:04:53.678 real 0m1.167s 00:04:53.678 user 0m1.088s 00:04:53.678 sys 0m0.075s 00:04:53.678 16:47:45 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.678 16:47:45 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:53.678 ************************************ 00:04:53.678 END TEST event_reactor 00:04:53.678 ************************************ 00:04:53.678 16:47:45 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.678 16:47:45 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:53.678 16:47:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.678 16:47:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.678 ************************************ 00:04:53.678 START TEST event_reactor_perf 00:04:53.678 ************************************ 00:04:53.678 16:47:45 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.937 [2024-11-20 16:47:45.854129] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:04:53.937 [2024-11-20 16:47:45.854216] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1728874 ] 00:04:53.937 [2024-11-20 16:47:45.943250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.937 [2024-11-20 16:47:45.971783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.878 test_start 00:04:54.878 test_end 00:04:54.878 Performance: 539413 events per second 00:04:54.878 00:04:54.878 real 0m1.165s 00:04:54.878 user 0m1.082s 00:04:54.878 sys 0m0.079s 00:04:54.878 16:47:46 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.878 16:47:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:54.878 ************************************ 00:04:54.878 END TEST event_reactor_perf 00:04:54.878 ************************************ 00:04:54.878 16:47:47 event -- event/event.sh@49 -- # uname -s 00:04:54.878 16:47:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:54.878 16:47:47 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:54.878 16:47:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.878 16:47:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.878 16:47:47 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.139 ************************************ 00:04:55.139 START TEST event_scheduler 00:04:55.139 ************************************ 00:04:55.139 16:47:47 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:55.139 * Looking for test storage... 
00:04:55.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:55.139 16:47:47 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:55.139 16:47:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:55.139 16:47:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:55.139 16:47:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.139 16:47:47 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:55.139 16:47:47 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.139 16:47:47 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:55.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.139 --rc genhtml_branch_coverage=1 00:04:55.139 --rc genhtml_function_coverage=1 00:04:55.139 --rc genhtml_legend=1 00:04:55.139 --rc geninfo_all_blocks=1 00:04:55.139 --rc geninfo_unexecuted_blocks=1 00:04:55.139 00:04:55.139 ' 00:04:55.139 16:47:47 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:55.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.139 --rc genhtml_branch_coverage=1 00:04:55.139 --rc genhtml_function_coverage=1 00:04:55.139 --rc genhtml_legend=1 00:04:55.139 --rc geninfo_all_blocks=1 00:04:55.139 --rc geninfo_unexecuted_blocks=1 00:04:55.139 00:04:55.139 ' 00:04:55.139 16:47:47 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:55.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.139 --rc genhtml_branch_coverage=1 00:04:55.139 --rc genhtml_function_coverage=1 00:04:55.139 --rc genhtml_legend=1 00:04:55.139 --rc geninfo_all_blocks=1 00:04:55.139 --rc geninfo_unexecuted_blocks=1 00:04:55.139 00:04:55.139 ' 00:04:55.139 16:47:47 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:55.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.139 --rc genhtml_branch_coverage=1 00:04:55.139 --rc genhtml_function_coverage=1 00:04:55.139 --rc genhtml_legend=1 00:04:55.139 --rc geninfo_all_blocks=1 00:04:55.139 --rc geninfo_unexecuted_blocks=1 00:04:55.139 00:04:55.139 ' 00:04:55.139 16:47:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:55.139 16:47:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1729265 00:04:55.139 16:47:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.139 16:47:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1729265 00:04:55.139 16:47:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:04:55.139 16:47:47 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1729265 ']' 00:04:55.139 16:47:47 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.139 16:47:47 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.139 16:47:47 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.139 16:47:47 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.139 16:47:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.400 [2024-11-20 16:47:47.334844] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:04:55.400 [2024-11-20 16:47:47.334900] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729265 ] 00:04:55.400 [2024-11-20 16:47:47.422393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:55.400 [2024-11-20 16:47:47.468816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.400 [2024-11-20 16:47:47.468977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.400 [2024-11-20 16:47:47.469141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:55.400 [2024-11-20 16:47:47.469142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.971 16:47:48 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.971 16:47:48 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:55.971 16:47:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:55.971 16:47:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.971 16:47:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.234 [2024-11-20 16:47:48.147573] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:56.234 [2024-11-20 16:47:48.147592] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:56.234 [2024-11-20 16:47:48.147603] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:56.234 [2024-11-20 16:47:48.147609] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:56.234 [2024-11-20 16:47:48.147614] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:56.234 16:47:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.234 16:47:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:56.234 16:47:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.234 16:47:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.234 [2024-11-20 16:47:48.211608] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
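The scheduler test that follows drives everything over JSON-RPC: the app was launched with --wait-for-rpc, the dynamic scheduler is selected, and threads are created, throttled, and deleted through a test-local rpc.py plugin. A hand-runnable condensation of that sequence is sketched below; the PYTHONPATH export is an assumption about where the scheduler_plugin module lives in this workspace, while every RPC name and flag is taken from the trace itself.

#!/usr/bin/env bash
# Condensed replay of the scheduler_create_thread sequence traced below (a sketch,
# not the test script itself). Assumes the scheduler test app is already listening
# on /var/tmp/spdk.sock and was started with --wait-for-rpc.
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
export PYTHONPATH=$SPDK/test/event/scheduler   # assumed location of scheduler_plugin
rpc=$SPDK/scripts/rpc.py

$rpc framework_set_scheduler dynamic   # pick the dynamic scheduler
$rpc framework_start_init              # finish --wait-for-rpc startup

# Create one thread pinned to core 0 at 100% active; the RPC returns its thread id.
tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
$rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
$rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"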
00:04:56.234 16:47:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.234 16:47:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:56.234 16:47:48 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.234 16:47:48 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.234 16:47:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.234 ************************************ 00:04:56.234 START TEST scheduler_create_thread 00:04:56.234 ************************************ 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.234 2 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.234 3 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.234 4 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.234 5 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.234 6 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.234 7 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:56.234 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.235 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.235 8 00:04:56.235 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.235 16:47:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:56.235 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.235 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.235 9 00:04:56.235 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.235 16:47:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:56.235 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.235 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.808 10 00:04:56.808 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.808 16:47:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:56.808 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.808 16:47:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.193 16:47:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.193 16:47:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:58.193 16:47:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:58.193 16:47:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.193 16:47:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.133 16:47:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.133 16:47:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:59.133 16:47:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.133 16:47:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.706 16:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.706 16:47:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:59.706 16:47:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:59.706 16:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.706 16:47:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.648 16:47:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.648 00:05:00.648 real 0m4.223s 00:05:00.648 user 0m0.024s 00:05:00.648 sys 0m0.008s 00:05:00.648 16:47:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.648 16:47:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.648 ************************************ 00:05:00.648 END TEST scheduler_create_thread 00:05:00.648 ************************************ 00:05:00.648 16:47:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:00.648 16:47:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1729265 00:05:00.648 16:47:52 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1729265 ']' 00:05:00.648 16:47:52 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1729265 00:05:00.648 16:47:52 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:00.648 16:47:52 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.648 16:47:52 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1729265 00:05:00.648 16:47:52 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:00.648 16:47:52 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:00.648 16:47:52 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1729265' 00:05:00.648 killing process with pid 1729265 00:05:00.648 16:47:52 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1729265 00:05:00.648 16:47:52 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1729265 00:05:00.648 [2024-11-20 16:47:52.753262] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
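The teardown just traced (the pid guard, uname, ps --no-headers -o comm=, the "killing process with pid" echo, then kill and wait) is the harness's killprocess helper at work. A minimal sketch reconstructed from that xtrace (a paraphrase, not the verbatim autotest_common.sh source) looks like:

# Reconstruction of the killprocess pattern seen throughout this log.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # the '[' -z ... ']' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 1    # is the process still alive?
    if [ "$(uname)" = Linux ]; then
        # Refuse to signal a sudo wrapper; the trace compares the comm name to 'sudo'.
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                # reap so the exit status propagates
}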
00:05:00.909 00:05:00.909 real 0m5.832s 00:05:00.909 user 0m12.910s 00:05:00.909 sys 0m0.417s 00:05:00.909 16:47:52 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.909 16:47:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.909 ************************************ 00:05:00.909 END TEST event_scheduler 00:05:00.909 ************************************ 00:05:00.909 16:47:52 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:00.909 16:47:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:00.909 16:47:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.909 16:47:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.909 16:47:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.909 ************************************ 00:05:00.909 START TEST app_repeat 00:05:00.909 ************************************ 00:05:00.909 16:47:52 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:00.909 16:47:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.909 16:47:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.909 16:47:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:00.909 16:47:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.909 16:47:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:00.909 16:47:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:00.909 16:47:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:00.909 16:47:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1730332 00:05:00.909 16:47:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.909 16:47:53 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:00.909 16:47:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1730332' 00:05:00.909 Process app_repeat pid: 1730332 00:05:00.909 16:47:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:00.909 16:47:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:00.909 spdk_app_start Round 0 00:05:00.909 16:47:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1730332 /var/tmp/spdk-nbd.sock 00:05:00.909 16:47:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1730332 ']' 00:05:00.909 16:47:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.909 16:47:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.909 16:47:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:00.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.909 16:47:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.909 16:47:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.909 [2024-11-20 16:47:53.031277] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
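The app_repeat run starting here follows a simple round loop; roughly (a sketch under assumptions: the harness backgrounds the app and captures repeat_pid, and the app re-enters spdk_app_start after each SIGTERM, as the Round 0-3 messages in the traces show):

    # Condensed shape of event.sh's app_repeat_test loop (workspace paths shortened).
    test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # block until the RPC socket is up
        # create two Malloc bdevs, attach them as /dev/nbd0 and /dev/nbd1, write and verify
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3                                              # app restarts for the next round
    done
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock       # Round 3 comes back up...
    killprocess "$repeat_pid"                                # ...and is torn down for good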
00:05:00.909 [2024-11-20 16:47:53.031341] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1730332 ] 00:05:01.170 [2024-11-20 16:47:53.117815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.170 [2024-11-20 16:47:53.148670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.170 [2024-11-20 16:47:53.148672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.170 16:47:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.171 16:47:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:01.171 16:47:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.431 Malloc0 00:05:01.431 16:47:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.431 Malloc1 00:05:01.431 16:47:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.431 16:47:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.431 16:47:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.431 16:47:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:01.431 16:47:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.431 16:47:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:01.431 16:47:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.431 16:47:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.431 16:47:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.431 16:47:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:01.431 16:47:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.431 16:47:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:01.431 16:47:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:01.431 16:47:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:01.431 16:47:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.431 16:47:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:01.692 /dev/nbd0 00:05:01.692 16:47:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:01.692 16:47:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:01.692 16:47:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:01.692 16:47:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:01.692 16:47:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:01.692 16:47:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:01.692 16:47:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:01.692 16:47:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:01.692 16:47:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:01.692 16:47:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:01.692 16:47:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.692 1+0 records in 00:05:01.692 1+0 records out 00:05:01.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310726 s, 13.2 MB/s 00:05:01.692 16:47:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.692 16:47:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:01.692 16:47:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.692 16:47:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:01.692 16:47:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:01.692 16:47:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.692 16:47:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.692 16:47:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:01.953 /dev/nbd1 00:05:01.953 16:47:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:01.953 16:47:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:01.953 16:47:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:01.953 16:47:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:01.953 16:47:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:01.953 16:47:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:01.953 16:47:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:01.953 16:47:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:01.953 16:47:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:01.953 16:47:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:01.953 16:47:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.953 1+0 records in 00:05:01.953 1+0 records out 00:05:01.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034477 s, 11.9 MB/s 00:05:01.953 16:47:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.953 16:47:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:01.953 16:47:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.953 16:47:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:01.953 16:47:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:01.953 16:47:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.953 16:47:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.953 16:47:54 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.953 16:47:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.953 16:47:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:02.214 { 00:05:02.214 "nbd_device": "/dev/nbd0", 00:05:02.214 "bdev_name": "Malloc0" 00:05:02.214 }, 00:05:02.214 { 00:05:02.214 "nbd_device": "/dev/nbd1", 00:05:02.214 "bdev_name": "Malloc1" 00:05:02.214 } 00:05:02.214 ]' 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:02.214 { 00:05:02.214 "nbd_device": "/dev/nbd0", 00:05:02.214 "bdev_name": "Malloc0" 00:05:02.214 }, 00:05:02.214 { 00:05:02.214 "nbd_device": "/dev/nbd1", 00:05:02.214 "bdev_name": "Malloc1" 00:05:02.214 } 00:05:02.214 ]' 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:02.214 /dev/nbd1' 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:02.214 /dev/nbd1' 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:02.214 256+0 records in 00:05:02.214 256+0 records out 00:05:02.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120897 s, 86.7 MB/s 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:02.214 256+0 records in 00:05:02.214 256+0 records out 00:05:02.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119666 s, 87.6 MB/s 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:02.214 256+0 records in 00:05:02.214 256+0 records out 00:05:02.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125226 s, 83.7 MB/s 00:05:02.214 16:47:54 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.214 16:47:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:02.475 16:47:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:02.475 16:47:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:02.475 16:47:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:02.475 16:47:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.475 16:47:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.475 16:47:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:02.475 16:47:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.475 16:47:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.475 16:47:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.476 16:47:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:02.736 16:47:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:02.736 16:47:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:02.736 16:47:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:02.736 16:47:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.736 16:47:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:02.736 16:47:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:02.736 16:47:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.736 16:47:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.736 16:47:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.736 16:47:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.736 16:47:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.996 16:47:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:02.996 16:47:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:02.996 16:47:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.996 16:47:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:02.996 16:47:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:02.996 16:47:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.996 16:47:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:02.996 16:47:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:02.996 16:47:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:02.996 16:47:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:02.996 16:47:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:02.996 16:47:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:02.996 16:47:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:02.996 16:47:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:03.256 [2024-11-20 16:47:55.235407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.256 [2024-11-20 16:47:55.264235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.256 [2024-11-20 16:47:55.264252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.256 [2024-11-20 16:47:55.293493] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:03.256 [2024-11-20 16:47:55.293523] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:06.573 16:47:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:06.573 16:47:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:06.573 spdk_app_start Round 1 00:05:06.573 16:47:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1730332 /var/tmp/spdk-nbd.sock 00:05:06.573 16:47:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1730332 ']' 00:05:06.573 16:47:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:06.573 16:47:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.573 16:47:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:06.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
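The write/verify step behind the Round 0 traces above is plain dd and cmp; in essence (a sketch, with the long workspace paths shortened):

    # Essence of nbd_dd_data_verify as traced above.
    tmp=test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write through the nbd device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                             # read back and byte-compare
    done
    rm "$tmp"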
00:05:06.573 16:47:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.573 16:47:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:06.573 16:47:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.573 16:47:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:06.573 16:47:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.573 Malloc0 00:05:06.573 16:47:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.573 Malloc1 00:05:06.573 16:47:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.573 16:47:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.573 16:47:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.573 16:47:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:06.573 16:47:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.573 16:47:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:06.573 16:47:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.573 16:47:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.573 16:47:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.573 16:47:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:06.573 16:47:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.573 16:47:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:06.573 16:47:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:06.573 16:47:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:06.573 16:47:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.573 16:47:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:06.833 /dev/nbd0 00:05:06.833 16:47:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.833 16:47:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:06.833 16:47:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:06.833 16:47:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:06.833 16:47:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:06.833 16:47:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:06.833 16:47:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:06.833 16:47:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:06.833 16:47:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:06.833 16:47:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:06.833 16:47:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:06.833 1+0 records in 00:05:06.833 1+0 records out 00:05:06.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275276 s, 14.9 MB/s 00:05:06.833 16:47:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.833 16:47:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:06.833 16:47:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.833 16:47:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:06.833 16:47:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:06.833 16:47:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.833 16:47:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.833 16:47:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:07.093 /dev/nbd1 00:05:07.093 16:47:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:07.093 16:47:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:07.093 16:47:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:07.093 16:47:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:07.093 16:47:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:07.093 16:47:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:07.093 16:47:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:07.093 16:47:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:07.093 16:47:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:07.093 16:47:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:07.093 16:47:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.093 1+0 records in 00:05:07.093 1+0 records out 00:05:07.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202805 s, 20.2 MB/s 00:05:07.093 16:47:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:07.093 16:47:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:07.093 16:47:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:07.093 16:47:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:07.093 16:47:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:07.093 16:47:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.093 16:47:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.093 16:47:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.093 16:47:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.093 16:47:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:07.352 { 00:05:07.352 "nbd_device": "/dev/nbd0", 00:05:07.352 "bdev_name": "Malloc0" 00:05:07.352 }, 00:05:07.352 { 00:05:07.352 "nbd_device": "/dev/nbd1", 00:05:07.352 "bdev_name": "Malloc1" 00:05:07.352 } 00:05:07.352 ]' 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:07.352 { 00:05:07.352 "nbd_device": "/dev/nbd0", 00:05:07.352 "bdev_name": "Malloc0" 00:05:07.352 }, 00:05:07.352 { 00:05:07.352 "nbd_device": "/dev/nbd1", 00:05:07.352 "bdev_name": "Malloc1" 00:05:07.352 } 00:05:07.352 ]' 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:07.352 /dev/nbd1' 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:07.352 /dev/nbd1' 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:07.352 256+0 records in 00:05:07.352 256+0 records out 00:05:07.352 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127255 s, 82.4 MB/s 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:07.352 256+0 records in 00:05:07.352 256+0 records out 00:05:07.352 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124488 s, 84.2 MB/s 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:07.352 256+0 records in 00:05:07.352 256+0 records out 00:05:07.352 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127468 s, 82.3 MB/s 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.352 16:47:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:07.612 16:47:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:07.612 16:47:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:07.612 16:47:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:07.612 16:47:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.612 16:47:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.612 16:47:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:07.612 16:47:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.612 16:47:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.612 16:47:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.612 16:47:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.871 16:47:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.871 16:47:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.871 16:47:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.871 16:47:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.871 16:47:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.871 16:47:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.871 16:47:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.871 16:47:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.871 16:47:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.871 16:47:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.871 16:47:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.132 16:48:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:08.132 16:48:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:08.132 16:48:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.132 16:48:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:08.132 16:48:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:08.132 16:48:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.132 16:48:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:08.132 16:48:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:08.132 16:48:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:08.132 16:48:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:08.132 16:48:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:08.132 16:48:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:08.132 16:48:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:08.132 16:48:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:08.393 [2024-11-20 16:48:00.383458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.393 [2024-11-20 16:48:00.412193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.393 [2024-11-20 16:48:00.412220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.393 [2024-11-20 16:48:00.441808] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:08.393 [2024-11-20 16:48:00.441839] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:11.691 16:48:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:11.691 16:48:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:11.691 spdk_app_start Round 2 00:05:11.691 16:48:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1730332 /var/tmp/spdk-nbd.sock 00:05:11.691 16:48:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1730332 ']' 00:05:11.691 16:48:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.691 16:48:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.691 16:48:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:11.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
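The waitfornbd checks recurring in these traces poll /proc/partitions and then prove the device with one direct read; approximately (a sketch: the inter-poll delay is an assumption, since the traces above always match on the first attempt):

    # Approximate waitfornbd() logic seen in the traces above (condensed sketch).
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # device registered yet?
            sleep 0.1                                          # assumed delay between polls
        done
        # confirm the device actually serves I/O with one direct 4 KiB read
        dd if="/dev/$nbd_name" of=test/event/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s test/event/nbdtest)
        rm -f test/event/nbdtest
        [ "$size" != 0 ]                                       # non-empty read means success
    }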
00:05:11.691 16:48:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.691 16:48:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.691 16:48:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.691 16:48:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:11.691 16:48:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.691 Malloc0 00:05:11.692 16:48:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.692 Malloc1 00:05:11.951 16:48:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.951 16:48:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.951 16:48:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.951 16:48:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.951 16:48:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.951 16:48:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.951 16:48:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.951 16:48:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.951 16:48:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.951 16:48:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.951 16:48:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.951 16:48:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:11.951 16:48:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:11.951 16:48:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.951 16:48:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.951 16:48:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.951 /dev/nbd0 00:05:11.951 16:48:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.951 16:48:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.951 16:48:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:11.951 16:48:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:11.951 16:48:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:11.951 16:48:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:11.951 16:48:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:11.952 16:48:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:11.952 16:48:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:11.952 16:48:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:11.952 16:48:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:11.952 1+0 records in 00:05:11.952 1+0 records out 00:05:11.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306031 s, 13.4 MB/s 00:05:11.952 16:48:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.952 16:48:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:11.952 16:48:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.952 16:48:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:11.952 16:48:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:11.952 16:48:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.952 16:48:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.952 16:48:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.211 /dev/nbd1 00:05:12.212 16:48:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.212 16:48:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.212 16:48:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:12.212 16:48:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.212 16:48:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.212 16:48:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.212 16:48:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:12.212 16:48:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.212 16:48:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.212 16:48:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.212 16:48:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.212 1+0 records in 00:05:12.212 1+0 records out 00:05:12.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275895 s, 14.8 MB/s 00:05:12.212 16:48:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.212 16:48:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.212 16:48:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.212 16:48:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.212 16:48:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.212 16:48:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.212 16:48:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.212 16:48:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.212 16:48:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.212 16:48:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:12.472 { 00:05:12.472 "nbd_device": "/dev/nbd0", 00:05:12.472 "bdev_name": "Malloc0" 00:05:12.472 }, 00:05:12.472 { 00:05:12.472 "nbd_device": "/dev/nbd1", 00:05:12.472 "bdev_name": "Malloc1" 00:05:12.472 } 00:05:12.472 ]' 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.472 { 00:05:12.472 "nbd_device": "/dev/nbd0", 00:05:12.472 "bdev_name": "Malloc0" 00:05:12.472 }, 00:05:12.472 { 00:05:12.472 "nbd_device": "/dev/nbd1", 00:05:12.472 "bdev_name": "Malloc1" 00:05:12.472 } 00:05:12.472 ]' 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.472 /dev/nbd1' 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.472 /dev/nbd1' 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.472 256+0 records in 00:05:12.472 256+0 records out 00:05:12.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127242 s, 82.4 MB/s 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.472 256+0 records in 00:05:12.472 256+0 records out 00:05:12.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121734 s, 86.1 MB/s 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.472 256+0 records in 00:05:12.472 256+0 records out 00:05:12.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012672 s, 82.7 MB/s 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.472 16:48:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.473 16:48:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.473 16:48:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.473 16:48:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.473 16:48:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.734 16:48:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.734 16:48:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.734 16:48:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.734 16:48:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.734 16:48:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.734 16:48:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.734 16:48:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.734 16:48:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.734 16:48:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.734 16:48:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.994 16:48:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.994 16:48:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.994 16:48:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.994 16:48:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.994 16:48:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.994 16:48:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.994 16:48:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.994 16:48:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.994 16:48:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.994 16:48:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.994 16:48:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.255 16:48:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.255 16:48:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.256 16:48:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.256 16:48:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.256 16:48:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.256 16:48:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.256 16:48:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.256 16:48:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.256 16:48:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.256 16:48:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.256 16:48:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.256 16:48:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.256 16:48:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.517 16:48:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.517 [2024-11-20 16:48:05.544588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.517 [2024-11-20 16:48:05.573279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.517 [2024-11-20 16:48:05.573280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.517 [2024-11-20 16:48:05.602352] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.517 [2024-11-20 16:48:05.602383] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:16.815 16:48:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1730332 /var/tmp/spdk-nbd.sock 00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1730332 ']' 00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:16.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
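The count=0 results in these traces come from the nbd_get_count helper, which is just nbd_get_disks piped through jq; roughly (a sketch, rpc.py path shortened):

    # Sketch of nbd_get_count: list disks, count nbd device nodes in the JSON.
    nbd_disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    count=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    echo "$count"   # 2 while both devices are attached, 0 after nbd_stop_disk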
00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:16.815 16:48:08 event.app_repeat -- event/event.sh@39 -- # killprocess 1730332 00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1730332 ']' 00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1730332 00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1730332 00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1730332' 00:05:16.815 killing process with pid 1730332 00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1730332 00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1730332 00:05:16.815 spdk_app_start is called in Round 0. 00:05:16.815 Shutdown signal received, stop current app iteration 00:05:16.815 Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 reinitialization... 00:05:16.815 spdk_app_start is called in Round 1. 00:05:16.815 Shutdown signal received, stop current app iteration 00:05:16.815 Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 reinitialization... 00:05:16.815 spdk_app_start is called in Round 2. 00:05:16.815 Shutdown signal received, stop current app iteration 00:05:16.815 Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 reinitialization... 00:05:16.815 spdk_app_start is called in Round 3. 
00:05:16.815 Shutdown signal received, stop current app iteration 00:05:16.815 16:48:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:16.815 16:48:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:16.815 00:05:16.815 real 0m15.814s 00:05:16.815 user 0m34.761s 00:05:16.815 sys 0m2.279s 00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.815 16:48:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.815 ************************************ 00:05:16.815 END TEST app_repeat 00:05:16.815 ************************************ 00:05:16.815 16:48:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:16.815 16:48:08 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:16.815 16:48:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.815 16:48:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.815 16:48:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.815 ************************************ 00:05:16.815 START TEST cpu_locks 00:05:16.815 ************************************ 00:05:16.815 16:48:08 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:16.815 * Looking for test storage... 00:05:16.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:16.815 16:48:08 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.078 16:48:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.078 16:48:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:17.078 16:48:09 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.078 16:48:09 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:17.078 16:48:09 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.078 16:48:09 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:17.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.078 --rc genhtml_branch_coverage=1 00:05:17.078 --rc genhtml_function_coverage=1 00:05:17.078 --rc genhtml_legend=1 00:05:17.078 --rc geninfo_all_blocks=1 00:05:17.078 --rc geninfo_unexecuted_blocks=1 00:05:17.078 00:05:17.078 ' 00:05:17.078 16:48:09 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:17.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.078 --rc genhtml_branch_coverage=1 00:05:17.078 --rc genhtml_function_coverage=1 00:05:17.078 --rc genhtml_legend=1 00:05:17.078 --rc geninfo_all_blocks=1 00:05:17.078 --rc geninfo_unexecuted_blocks=1 00:05:17.078 00:05:17.078 ' 00:05:17.078 16:48:09 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:17.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.078 --rc genhtml_branch_coverage=1 00:05:17.078 --rc genhtml_function_coverage=1 00:05:17.078 --rc genhtml_legend=1 00:05:17.078 --rc geninfo_all_blocks=1 00:05:17.078 --rc geninfo_unexecuted_blocks=1 00:05:17.078 00:05:17.078 ' 00:05:17.078 16:48:09 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:17.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.078 --rc genhtml_branch_coverage=1 00:05:17.078 --rc genhtml_function_coverage=1 00:05:17.078 --rc genhtml_legend=1 00:05:17.078 --rc geninfo_all_blocks=1 00:05:17.078 --rc geninfo_unexecuted_blocks=1 00:05:17.078 00:05:17.078 ' 00:05:17.078 16:48:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:17.078 16:48:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:17.078 16:48:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:17.078 16:48:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:17.078 16:48:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.078 16:48:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.078 16:48:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.078 ************************************ 
00:05:17.078 START TEST default_locks 00:05:17.078 ************************************ 00:05:17.078 16:48:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:17.078 16:48:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1734038 00:05:17.078 16:48:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1734038 00:05:17.078 16:48:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.078 16:48:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1734038 ']' 00:05:17.078 16:48:09 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.078 16:48:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.078 16:48:09 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.078 16:48:09 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.078 16:48:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.078 [2024-11-20 16:48:09.187344] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:05:17.078 [2024-11-20 16:48:09.187410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1734038 ] 00:05:17.339 [2024-11-20 16:48:09.273905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.339 [2024-11-20 16:48:09.309077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.910 16:48:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.910 16:48:09 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:17.910 16:48:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1734038 00:05:17.910 16:48:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1734038 00:05:17.910 16:48:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:18.480 lslocks: write error 00:05:18.480 16:48:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1734038 00:05:18.480 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1734038 ']' 00:05:18.480 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1734038 00:05:18.480 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:18.480 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.480 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1734038 00:05:18.480 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.480 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.480 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 1734038' 00:05:18.480 killing process with pid 1734038 00:05:18.480 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1734038 00:05:18.480 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1734038 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1734038 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1734038 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1734038 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1734038 ']' 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1734038) - No such process 00:05:18.740 ERROR: process (pid: 1734038) is no longer running 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:18.740 00:05:18.740 real 0m1.651s 00:05:18.740 user 0m1.760s 00:05:18.740 sys 0m0.592s 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.740 16:48:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.740 ************************************ 00:05:18.740 END TEST default_locks 00:05:18.740 ************************************ 00:05:18.740 16:48:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:18.740 16:48:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.740 16:48:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.740 16:48:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.740 ************************************ 00:05:18.740 START TEST default_locks_via_rpc 00:05:18.740 ************************************ 00:05:18.740 16:48:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:18.740 16:48:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1734761 00:05:18.740 16:48:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1734761 00:05:18.740 16:48:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.740 16:48:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1734761 ']' 00:05:18.740 16:48:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.740 16:48:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.740 16:48:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:18.740 16:48:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.740 16:48:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.740 [2024-11-20 16:48:10.911391] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:05:18.740 [2024-11-20 16:48:10.911443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1734761 ] 00:05:19.000 [2024-11-20 16:48:10.995813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.000 [2024-11-20 16:48:11.027798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.573 16:48:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.573 16:48:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:19.573 16:48:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:19.573 16:48:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.573 16:48:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.573 16:48:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.573 16:48:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:19.573 16:48:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:19.573 16:48:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:19.573 16:48:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:19.573 16:48:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:19.573 16:48:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.573 16:48:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.573 16:48:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.573 16:48:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1734761 00:05:19.573 16:48:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1734761 00:05:19.573 16:48:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.145 16:48:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1734761 00:05:20.145 16:48:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1734761 ']' 00:05:20.145 16:48:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1734761 00:05:20.145 16:48:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:20.145 16:48:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.145 16:48:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1734761 00:05:20.145 16:48:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.145 
16:48:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.145 16:48:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1734761' 00:05:20.145 killing process with pid 1734761 00:05:20.145 16:48:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1734761 00:05:20.145 16:48:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1734761 00:05:20.405 00:05:20.405 real 0m1.492s 00:05:20.405 user 0m1.605s 00:05:20.405 sys 0m0.524s 00:05:20.405 16:48:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.405 16:48:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.405 ************************************ 00:05:20.405 END TEST default_locks_via_rpc 00:05:20.405 ************************************ 00:05:20.405 16:48:12 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:20.405 16:48:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.405 16:48:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.405 16:48:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.405 ************************************ 00:05:20.405 START TEST non_locking_app_on_locked_coremask 00:05:20.405 ************************************ 00:05:20.405 16:48:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:20.405 16:48:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1735178 00:05:20.405 16:48:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1735178 /var/tmp/spdk.sock 00:05:20.405 16:48:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.405 16:48:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1735178 ']' 00:05:20.405 16:48:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.405 16:48:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.405 16:48:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.405 16:48:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.405 16:48:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.406 [2024-11-20 16:48:12.478316] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:05:20.406 [2024-11-20 16:48:12.478372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735178 ] 00:05:20.406 [2024-11-20 16:48:12.564944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.665 [2024-11-20 16:48:12.597243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.235 16:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.235 16:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:21.235 16:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1735233 00:05:21.235 16:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1735233 /var/tmp/spdk2.sock 00:05:21.235 16:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1735233 ']' 00:05:21.235 16:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:21.235 16:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.235 16:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.235 16:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.235 16:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.235 16:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.235 [2024-11-20 16:48:13.321561] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:05:21.235 [2024-11-20 16:48:13.321615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735233 ] 00:05:21.235 [2024-11-20 16:48:13.407522] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:21.235 [2024-11-20 16:48:13.407545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.494 [2024-11-20 16:48:13.469858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.064 16:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.064 16:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:22.064 16:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1735178 00:05:22.064 16:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1735178 00:05:22.064 16:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.635 lslocks: write error 00:05:22.635 16:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1735178 00:05:22.635 16:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1735178 ']' 00:05:22.635 16:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1735178 00:05:22.635 16:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:22.635 16:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.635 16:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1735178 00:05:22.635 16:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.635 16:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.635 16:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1735178' 00:05:22.635 killing process with pid 1735178 00:05:22.635 16:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1735178 00:05:22.635 16:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1735178 00:05:23.205 16:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1735233 00:05:23.205 16:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1735233 ']' 00:05:23.205 16:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1735233 00:05:23.205 16:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:23.205 16:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.205 16:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1735233 00:05:23.205 16:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.205 16:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.205 16:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1735233' 00:05:23.205 
killing process with pid 1735233 00:05:23.205 16:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1735233 00:05:23.205 16:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1735233 00:05:23.205 00:05:23.205 real 0m2.950s 00:05:23.205 user 0m3.263s 00:05:23.205 sys 0m0.933s 00:05:23.205 16:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.205 16:48:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.205 ************************************ 00:05:23.205 END TEST non_locking_app_on_locked_coremask 00:05:23.205 ************************************ 00:05:23.464 16:48:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:23.464 16:48:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.464 16:48:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.464 16:48:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.464 ************************************ 00:05:23.464 START TEST locking_app_on_unlocked_coremask 00:05:23.464 ************************************ 00:05:23.464 16:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:23.464 16:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1735706 00:05:23.464 16:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1735706 /var/tmp/spdk.sock 00:05:23.464 16:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:23.464 16:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1735706 ']' 00:05:23.464 16:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.464 16:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.464 16:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.464 16:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.464 16:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.464 [2024-11-20 16:48:15.506578] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:05:23.464 [2024-11-20 16:48:15.506636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735706 ] 00:05:23.464 [2024-11-20 16:48:15.592759] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:23.464 [2024-11-20 16:48:15.592797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.464 [2024-11-20 16:48:15.633929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.404 16:48:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.404 16:48:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:24.404 16:48:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:24.404 16:48:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1735945 00:05:24.404 16:48:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1735945 /var/tmp/spdk2.sock 00:05:24.404 16:48:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1735945 ']' 00:05:24.404 16:48:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.404 16:48:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.404 16:48:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.404 16:48:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.404 16:48:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.404 [2024-11-20 16:48:16.356022] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:05:24.405 [2024-11-20 16:48:16.356072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735945 ] 00:05:24.405 [2024-11-20 16:48:16.442194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.405 [2024-11-20 16:48:16.504581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.344 16:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.344 16:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:25.345 16:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1735945 00:05:25.345 16:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1735945 00:05:25.345 16:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.915 lslocks: write error 00:05:25.915 16:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1735706 00:05:25.915 16:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1735706 ']' 00:05:25.915 16:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1735706 00:05:25.915 16:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:25.915 16:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.915 16:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1735706 00:05:25.915 16:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.915 16:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.915 16:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1735706' 00:05:25.915 killing process with pid 1735706 00:05:25.915 16:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1735706 00:05:25.916 16:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1735706 00:05:26.176 16:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1735945 00:05:26.176 16:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1735945 ']' 00:05:26.176 16:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1735945 00:05:26.176 16:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:26.176 16:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.176 16:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1735945 00:05:26.176 16:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.176 16:48:18 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.176 16:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1735945' 00:05:26.176 killing process with pid 1735945 00:05:26.176 16:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1735945 00:05:26.176 16:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1735945 00:05:26.437 00:05:26.437 real 0m3.065s 00:05:26.437 user 0m3.413s 00:05:26.437 sys 0m0.937s 00:05:26.437 16:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.437 16:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.437 ************************************ 00:05:26.437 END TEST locking_app_on_unlocked_coremask 00:05:26.437 ************************************ 00:05:26.437 16:48:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:26.437 16:48:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.437 16:48:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.437 16:48:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.437 ************************************ 00:05:26.437 START TEST locking_app_on_locked_coremask 00:05:26.437 ************************************ 00:05:26.437 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:26.437 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1736321 00:05:26.437 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1736321 /var/tmp/spdk.sock 00:05:26.437 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.437 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1736321 ']' 00:05:26.437 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.437 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.437 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.437 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.437 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.699 [2024-11-20 16:48:18.647658] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:05:26.699 [2024-11-20 16:48:18.647706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736321 ] 00:05:26.699 [2024-11-20 16:48:18.708295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.699 [2024-11-20 16:48:18.738694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.960 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.960 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:26.960 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1736456 00:05:26.960 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1736456 /var/tmp/spdk2.sock 00:05:26.960 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:26.960 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:26.960 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1736456 /var/tmp/spdk2.sock 00:05:26.960 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:26.960 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.960 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:26.960 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.960 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1736456 /var/tmp/spdk2.sock 00:05:26.960 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1736456 ']' 00:05:26.960 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.960 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.960 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.960 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.960 16:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.960 [2024-11-20 16:48:18.971369] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:05:26.960 [2024-11-20 16:48:18.971421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736456 ] 00:05:26.960 [2024-11-20 16:48:19.061238] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1736321 has claimed it. 00:05:26.960 [2024-11-20 16:48:19.061274] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:27.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1736456) - No such process 00:05:27.529 ERROR: process (pid: 1736456) is no longer running 00:05:27.529 16:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.529 16:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:27.529 16:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:27.529 16:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:27.529 16:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:27.529 16:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:27.529 16:48:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1736321 00:05:27.529 16:48:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1736321 00:05:27.529 16:48:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.099 lslocks: write error 00:05:28.099 16:48:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1736321 00:05:28.099 16:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1736321 ']' 00:05:28.099 16:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1736321 00:05:28.099 16:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:28.099 16:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.099 16:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1736321 00:05:28.099 16:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.099 16:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.099 16:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1736321' 00:05:28.099 killing process with pid 1736321 00:05:28.099 16:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1736321 00:05:28.099 16:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1736321 00:05:28.099 00:05:28.099 real 0m1.668s 00:05:28.099 user 0m1.908s 00:05:28.099 sys 0m0.567s 00:05:28.099 16:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:28.099 16:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.099 ************************************ 00:05:28.099 END TEST locking_app_on_locked_coremask 00:05:28.100 ************************************ 00:05:28.361 16:48:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:28.361 16:48:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.361 16:48:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.361 16:48:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.361 ************************************ 00:05:28.361 START TEST locking_overlapped_coremask 00:05:28.361 ************************************ 00:05:28.361 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:28.361 16:48:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1736702 00:05:28.361 16:48:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1736702 /var/tmp/spdk.sock 00:05:28.361 16:48:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:28.361 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1736702 ']' 00:05:28.361 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.361 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.361 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.361 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.361 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.361 [2024-11-20 16:48:20.399684] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:05:28.361 [2024-11-20 16:48:20.399737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736702 ] 00:05:28.361 [2024-11-20 16:48:20.461199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.361 [2024-11-20 16:48:20.493423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.361 [2024-11-20 16:48:20.493447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.361 [2024-11-20 16:48:20.493448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.621 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.621 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:28.621 16:48:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1736855 00:05:28.621 16:48:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1736855 /var/tmp/spdk2.sock 00:05:28.621 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:28.621 16:48:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:28.621 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1736855 /var/tmp/spdk2.sock 00:05:28.621 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:28.621 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.621 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:28.621 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.621 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1736855 /var/tmp/spdk2.sock 00:05:28.621 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1736855 ']' 00:05:28.621 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.622 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.622 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.622 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.622 16:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.622 [2024-11-20 16:48:20.722533] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:05:28.622 [2024-11-20 16:48:20.722588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736855 ] 00:05:28.882 [2024-11-20 16:48:20.837908] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1736702 has claimed it. 00:05:28.882 [2024-11-20 16:48:20.837950] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:29.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1736855) - No such process 00:05:29.522 ERROR: process (pid: 1736855) is no longer running 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1736702 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1736702 ']' 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1736702 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1736702 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1736702' 00:05:29.522 killing process with pid 1736702 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1736702 00:05:29.522 16:48:21 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1736702 00:05:29.522 00:05:29.522 real 0m1.262s 00:05:29.522 user 0m3.569s 00:05:29.522 sys 0m0.338s 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.522 ************************************ 00:05:29.522 END TEST locking_overlapped_coremask 00:05:29.522 ************************************ 00:05:29.522 16:48:21 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:29.522 16:48:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.522 16:48:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.522 16:48:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.522 ************************************ 00:05:29.522 START TEST locking_overlapped_coremask_via_rpc 00:05:29.522 ************************************ 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1737061 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1737061 /var/tmp/spdk.sock 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1737061 ']' 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.522 16:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.802 [2024-11-20 16:48:21.721913] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:05:29.802 [2024-11-20 16:48:21.721965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737061 ] 00:05:29.802 [2024-11-20 16:48:21.803540] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
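A note on the coremasks driving the claim failures in this test: -m 0x7 is binary 111 (cores 0-2) and -m 0x1c is binary 11100 (cores 2-4), so the two targets overlap on core 2 only. A small standalone sketch, not part of the test suite, for decoding such a mask:

decode_mask() {
    local mask=$(( $1 )) core=0 cores=()
    while (( mask > 0 )); do
        if (( mask & 1 )); then cores+=("$core"); fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "cores: ${cores[*]}"
}
decode_mask 0x7    # cores: 0 1 2
decode_mask 0x1c   # cores: 2 3 4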
00:05:29.802 [2024-11-20 16:48:21.803561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.802 [2024-11-20 16:48:21.836479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.802 [2024-11-20 16:48:21.836626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.802 [2024-11-20 16:48:21.836628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.372 16:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.372 16:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.372 16:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1737309 00:05:30.372 16:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1737309 /var/tmp/spdk2.sock 00:05:30.372 16:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1737309 ']' 00:05:30.372 16:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:30.372 16:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.372 16:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.372 16:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.373 16:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.373 16:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.633 [2024-11-20 16:48:22.558513] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:05:30.633 [2024-11-20 16:48:22.558567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737309 ] 00:05:30.633 [2024-11-20 16:48:22.668399] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:30.633 [2024-11-20 16:48:22.668425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.633 [2024-11-20 16:48:22.742025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.633 [2024-11-20 16:48:22.745280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.633 [2024-11-20 16:48:22.745281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.203 [2024-11-20 16:48:23.361239] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1737061 has claimed it. 
00:05:31.203 request: 00:05:31.203 { 00:05:31.203 "method": "framework_enable_cpumask_locks", 00:05:31.203 "req_id": 1 00:05:31.203 } 00:05:31.203 Got JSON-RPC error response 00:05:31.203 response: 00:05:31.203 { 00:05:31.203 "code": -32603, 00:05:31.203 "message": "Failed to claim CPU core: 2" 00:05:31.203 } 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1737061 /var/tmp/spdk.sock 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1737061 ']' 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.203 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.463 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.463 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:31.463 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1737309 /var/tmp/spdk2.sock 00:05:31.463 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1737309 ']' 00:05:31.463 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.463 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.463 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
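What the trace above shows: both targets were started with --disable-cpumask-locks, the first framework_enable_cpumask_locks RPC (on the default /var/tmp/spdk.sock) succeeded and claimed cores 0-2 for pid 1737061, so the same RPC against /var/tmp/spdk2.sock fails with -32603 on the shared core 2. Each claimed core is backed by a lock file /var/tmp/spdk_cpu_lock_NNN, which is exactly what check_remaining_locks globs for below. A rough manual equivalent, assuming a target with -m 0x7 and --disable-cpumask-locks is already listening (sketch only, not the test itself):

./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
ls /var/tmp/spdk_cpu_lock_*    # expect spdk_cpu_lock_000, _001 and _002 for mask 0x7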
00:05:31.463 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.463 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.724 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.724 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:31.724 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:31.724 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:31.724 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:31.724 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:31.724 00:05:31.724 real 0m2.072s 00:05:31.724 user 0m0.858s 00:05:31.724 sys 0m0.145s 00:05:31.724 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.724 16:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.724 ************************************ 00:05:31.724 END TEST locking_overlapped_coremask_via_rpc 00:05:31.724 ************************************ 00:05:31.724 16:48:23 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:31.724 16:48:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1737061 ]] 00:05:31.724 16:48:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1737061 00:05:31.724 16:48:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1737061 ']' 00:05:31.724 16:48:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1737061 00:05:31.724 16:48:23 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:31.724 16:48:23 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.724 16:48:23 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1737061 00:05:31.724 16:48:23 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.724 16:48:23 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.724 16:48:23 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1737061' 00:05:31.724 killing process with pid 1737061 00:05:31.724 16:48:23 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1737061 00:05:31.724 16:48:23 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1737061 00:05:31.985 16:48:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1737309 ]] 00:05:31.985 16:48:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1737309 00:05:31.985 16:48:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1737309 ']' 00:05:31.985 16:48:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1737309 00:05:31.985 16:48:24 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:31.985 16:48:24 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:31.985 16:48:24 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1737309 00:05:31.985 16:48:24 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:31.985 16:48:24 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:31.985 16:48:24 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1737309' 00:05:31.985 killing process with pid 1737309 00:05:31.985 16:48:24 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1737309 00:05:31.985 16:48:24 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1737309 00:05:32.245 16:48:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:32.245 16:48:24 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:32.245 16:48:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1737061 ]] 00:05:32.245 16:48:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1737061 00:05:32.245 16:48:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1737061 ']' 00:05:32.245 16:48:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1737061 00:05:32.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1737061) - No such process 00:05:32.245 16:48:24 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1737061 is not found' 00:05:32.245 Process with pid 1737061 is not found 00:05:32.245 16:48:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1737309 ]] 00:05:32.245 16:48:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1737309 00:05:32.245 16:48:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1737309 ']' 00:05:32.245 16:48:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1737309 00:05:32.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1737309) - No such process 00:05:32.245 16:48:24 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1737309 is not found' 00:05:32.245 Process with pid 1737309 is not found 00:05:32.245 16:48:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:32.245 00:05:32.245 real 0m15.464s 00:05:32.245 user 0m26.504s 00:05:32.245 sys 0m4.985s 00:05:32.245 16:48:24 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.245 16:48:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.245 ************************************ 00:05:32.245 END TEST cpu_locks 00:05:32.245 ************************************ 00:05:32.245 00:05:32.245 real 0m41.309s 00:05:32.245 user 1m20.735s 00:05:32.245 sys 0m8.347s 00:05:32.246 16:48:24 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.246 16:48:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.246 ************************************ 00:05:32.246 END TEST event 00:05:32.246 ************************************ 00:05:32.505 16:48:24 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:32.505 16:48:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.505 16:48:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.505 16:48:24 -- common/autotest_common.sh@10 -- # set +x 00:05:32.505 ************************************ 00:05:32.505 START TEST thread 00:05:32.505 ************************************ 00:05:32.505 16:48:24 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:32.505 * Looking for test storage... 00:05:32.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:32.505 16:48:24 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:32.505 16:48:24 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:32.505 16:48:24 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:32.505 16:48:24 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:32.505 16:48:24 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.505 16:48:24 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.505 16:48:24 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.505 16:48:24 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.505 16:48:24 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.505 16:48:24 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.505 16:48:24 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.505 16:48:24 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.505 16:48:24 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.505 16:48:24 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.505 16:48:24 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.505 16:48:24 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:32.505 16:48:24 thread -- scripts/common.sh@345 -- # : 1 00:05:32.505 16:48:24 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.505 16:48:24 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.505 16:48:24 thread -- scripts/common.sh@365 -- # decimal 1 00:05:32.505 16:48:24 thread -- scripts/common.sh@353 -- # local d=1 00:05:32.505 16:48:24 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.505 16:48:24 thread -- scripts/common.sh@355 -- # echo 1 00:05:32.505 16:48:24 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.505 16:48:24 thread -- scripts/common.sh@366 -- # decimal 2 00:05:32.505 16:48:24 thread -- scripts/common.sh@353 -- # local d=2 00:05:32.505 16:48:24 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.505 16:48:24 thread -- scripts/common.sh@355 -- # echo 2 00:05:32.505 16:48:24 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.505 16:48:24 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.505 16:48:24 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.505 16:48:24 thread -- scripts/common.sh@368 -- # return 0 00:05:32.505 16:48:24 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.505 16:48:24 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:32.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.505 --rc genhtml_branch_coverage=1 00:05:32.505 --rc genhtml_function_coverage=1 00:05:32.505 --rc genhtml_legend=1 00:05:32.505 --rc geninfo_all_blocks=1 00:05:32.505 --rc geninfo_unexecuted_blocks=1 00:05:32.505 00:05:32.505 ' 00:05:32.505 16:48:24 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:32.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.505 --rc genhtml_branch_coverage=1 00:05:32.505 --rc genhtml_function_coverage=1 00:05:32.505 --rc genhtml_legend=1 00:05:32.505 --rc geninfo_all_blocks=1 00:05:32.505 --rc geninfo_unexecuted_blocks=1 00:05:32.505 
00:05:32.505 ' 00:05:32.505 16:48:24 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:32.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.505 --rc genhtml_branch_coverage=1 00:05:32.505 --rc genhtml_function_coverage=1 00:05:32.505 --rc genhtml_legend=1 00:05:32.505 --rc geninfo_all_blocks=1 00:05:32.505 --rc geninfo_unexecuted_blocks=1 00:05:32.505 00:05:32.505 ' 00:05:32.505 16:48:24 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:32.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.505 --rc genhtml_branch_coverage=1 00:05:32.505 --rc genhtml_function_coverage=1 00:05:32.505 --rc genhtml_legend=1 00:05:32.505 --rc geninfo_all_blocks=1 00:05:32.505 --rc geninfo_unexecuted_blocks=1 00:05:32.505 00:05:32.505 ' 00:05:32.505 16:48:24 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:32.505 16:48:24 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:32.505 16:48:24 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.505 16:48:24 thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.765 ************************************ 00:05:32.765 START TEST thread_poller_perf 00:05:32.765 ************************************ 00:05:32.765 16:48:24 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:32.765 [2024-11-20 16:48:24.737640] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:05:32.765 [2024-11-20 16:48:24.737733] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737844 ] 00:05:32.765 [2024-11-20 16:48:24.827985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.765 [2024-11-20 16:48:24.868751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.765 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:34.147 [2024-11-20T15:48:26.323Z] ====================================== 00:05:34.147 [2024-11-20T15:48:26.323Z] busy:2407149110 (cyc) 00:05:34.147 [2024-11-20T15:48:26.323Z] total_run_count: 418000 00:05:34.147 [2024-11-20T15:48:26.323Z] tsc_hz: 2400000000 (cyc) 00:05:34.147 [2024-11-20T15:48:26.323Z] ====================================== 00:05:34.147 [2024-11-20T15:48:26.323Z] poller_cost: 5758 (cyc), 2399 (nsec) 00:05:34.147 00:05:34.147 real 0m1.186s 00:05:34.147 user 0m1.095s 00:05:34.147 sys 0m0.086s 00:05:34.147 16:48:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.147 16:48:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.147 ************************************ 00:05:34.147 END TEST thread_poller_perf 00:05:34.147 ************************************ 00:05:34.147 16:48:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:34.147 16:48:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:34.147 16:48:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.147 16:48:25 thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.147 ************************************ 00:05:34.147 START TEST thread_poller_perf 00:05:34.147 ************************************ 00:05:34.147 16:48:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:34.147 [2024-11-20 16:48:26.001665] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:05:34.147 [2024-11-20 16:48:26.001766] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738161 ] 00:05:34.147 [2024-11-20 16:48:26.090321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.147 [2024-11-20 16:48:26.128630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.147 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:35.086 [2024-11-20T15:48:27.262Z] ====================================== 00:05:35.086 [2024-11-20T15:48:27.262Z] busy:2401719064 (cyc) 00:05:35.086 [2024-11-20T15:48:27.262Z] total_run_count: 5559000 00:05:35.086 [2024-11-20T15:48:27.262Z] tsc_hz: 2400000000 (cyc) 00:05:35.086 [2024-11-20T15:48:27.262Z] ====================================== 00:05:35.086 [2024-11-20T15:48:27.262Z] poller_cost: 432 (cyc), 180 (nsec) 00:05:35.086 00:05:35.086 real 0m1.176s 00:05:35.086 user 0m1.098s 00:05:35.086 sys 0m0.075s 00:05:35.086 16:48:27 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.086 16:48:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:35.086 ************************************ 00:05:35.086 END TEST thread_poller_perf 00:05:35.086 ************************************ 00:05:35.086 16:48:27 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:35.086 00:05:35.086 real 0m2.722s 00:05:35.086 user 0m2.354s 00:05:35.086 sys 0m0.384s 00:05:35.086 16:48:27 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.086 16:48:27 thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.086 ************************************ 00:05:35.086 END TEST thread 00:05:35.086 ************************************ 00:05:35.086 16:48:27 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:35.086 16:48:27 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:35.086 16:48:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.086 16:48:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.086 16:48:27 -- common/autotest_common.sh@10 -- # set +x 00:05:35.348 ************************************ 00:05:35.348 START TEST app_cmdline 00:05:35.348 ************************************ 00:05:35.348 16:48:27 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:35.348 * Looking for test storage... 
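In both result tables above, poller_cost is simply busy cycles divided by total_run_count, converted to nanoseconds through tsc_hz. Re-checking the first run's figures by hand (throwaway sketch; any awk will do):

awk 'BEGIN {
    cyc = 2407149110 / 418000                # busy (cyc) / total_run_count
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / 2400000000
}'
# prints: poller_cost: 5758 (cyc), 2399 (nsec)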
00:05:35.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:35.348 16:48:27 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.348 16:48:27 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.348 16:48:27 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.348 16:48:27 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.348 16:48:27 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:35.348 16:48:27 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.348 16:48:27 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.348 --rc genhtml_branch_coverage=1 00:05:35.348 --rc genhtml_function_coverage=1 00:05:35.348 --rc genhtml_legend=1 00:05:35.348 --rc geninfo_all_blocks=1 00:05:35.348 --rc geninfo_unexecuted_blocks=1 00:05:35.348 00:05:35.348 ' 00:05:35.348 16:48:27 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.348 --rc genhtml_branch_coverage=1 00:05:35.348 --rc genhtml_function_coverage=1 00:05:35.348 --rc genhtml_legend=1 00:05:35.348 --rc geninfo_all_blocks=1 00:05:35.348 --rc geninfo_unexecuted_blocks=1 
00:05:35.348 00:05:35.348 ' 00:05:35.348 16:48:27 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.348 --rc genhtml_branch_coverage=1 00:05:35.348 --rc genhtml_function_coverage=1 00:05:35.348 --rc genhtml_legend=1 00:05:35.348 --rc geninfo_all_blocks=1 00:05:35.348 --rc geninfo_unexecuted_blocks=1 00:05:35.348 00:05:35.348 ' 00:05:35.348 16:48:27 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.348 --rc genhtml_branch_coverage=1 00:05:35.348 --rc genhtml_function_coverage=1 00:05:35.348 --rc genhtml_legend=1 00:05:35.348 --rc geninfo_all_blocks=1 00:05:35.348 --rc geninfo_unexecuted_blocks=1 00:05:35.348 00:05:35.348 ' 00:05:35.348 16:48:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:35.348 16:48:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1738474 00:05:35.348 16:48:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1738474 00:05:35.348 16:48:27 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:35.348 16:48:27 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1738474 ']' 00:05:35.348 16:48:27 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.348 16:48:27 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.348 16:48:27 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.348 16:48:27 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.348 16:48:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:35.609 [2024-11-20 16:48:27.544859] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:05:35.609 [2024-11-20 16:48:27.544932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738474 ] 00:05:35.609 [2024-11-20 16:48:27.632075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.609 [2024-11-20 16:48:27.668179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.180 16:48:28 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.180 16:48:28 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:36.180 16:48:28 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:36.442 { 00:05:36.443 "version": "SPDK v25.01-pre git sha1 325a79ea3", 00:05:36.443 "fields": { 00:05:36.443 "major": 25, 00:05:36.443 "minor": 1, 00:05:36.443 "patch": 0, 00:05:36.443 "suffix": "-pre", 00:05:36.443 "commit": "325a79ea3" 00:05:36.443 } 00:05:36.443 } 00:05:36.443 16:48:28 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:36.443 16:48:28 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:36.443 16:48:28 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:36.443 16:48:28 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:36.443 16:48:28 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:36.443 16:48:28 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:36.443 16:48:28 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.443 16:48:28 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:36.443 16:48:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:36.443 16:48:28 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.443 16:48:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:36.443 16:48:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:36.443 16:48:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:36.443 16:48:28 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:36.443 16:48:28 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:36.443 16:48:28 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:36.443 16:48:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.443 16:48:28 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:36.443 16:48:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.443 16:48:28 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:36.443 16:48:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.443 16:48:28 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:36.443 16:48:28 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:36.443 16:48:28 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:36.704 request: 00:05:36.704 { 00:05:36.704 "method": "env_dpdk_get_mem_stats", 00:05:36.704 "req_id": 1 00:05:36.704 } 00:05:36.704 Got JSON-RPC error response 00:05:36.704 response: 00:05:36.704 { 00:05:36.704 "code": -32601, 00:05:36.704 "message": "Method not found" 00:05:36.704 } 00:05:36.704 16:48:28 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:36.704 16:48:28 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.704 16:48:28 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.704 16:48:28 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.704 16:48:28 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1738474 00:05:36.704 16:48:28 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1738474 ']' 00:05:36.704 16:48:28 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1738474 00:05:36.704 16:48:28 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:36.704 16:48:28 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.704 16:48:28 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1738474 00:05:36.704 16:48:28 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.704 16:48:28 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.704 16:48:28 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1738474' 00:05:36.704 killing process with pid 1738474 00:05:36.704 16:48:28 app_cmdline -- common/autotest_common.sh@973 -- # kill 1738474 00:05:36.704 16:48:28 app_cmdline -- common/autotest_common.sh@978 -- # wait 1738474 00:05:36.965 00:05:36.965 real 0m1.702s 00:05:36.965 user 0m2.023s 00:05:36.965 sys 0m0.468s 00:05:36.965 16:48:28 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.965 16:48:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:36.965 ************************************ 00:05:36.965 END TEST app_cmdline 00:05:36.965 ************************************ 00:05:36.965 16:48:29 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:36.965 16:48:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.965 16:48:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.965 16:48:29 -- common/autotest_common.sh@10 -- # set +x 00:05:36.965 ************************************ 00:05:36.965 START TEST version 00:05:36.965 ************************************ 00:05:36.965 16:48:29 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:37.225 * Looking for test storage... 
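The -32601 above is the allow-list at work, not a missing implementation: this spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so env_dpdk_get_mem_stats is refused even though the target supports it. A hand-run probe of the same behaviour might look like this (sketch, from the spdk checkout, once the target is listening):

./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
./scripts/rpc.py spdk_get_version           # permitted by the allow-list
./scripts/rpc.py env_dpdk_get_mem_stats     # rejected: "Method not found" (-32601)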
00:05:37.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:37.225 16:48:29 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.225 16:48:29 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.225 16:48:29 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.225 16:48:29 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.225 16:48:29 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.225 16:48:29 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.225 16:48:29 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.225 16:48:29 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.225 16:48:29 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.225 16:48:29 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.225 16:48:29 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.225 16:48:29 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.225 16:48:29 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.225 16:48:29 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.225 16:48:29 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.225 16:48:29 version -- scripts/common.sh@344 -- # case "$op" in 00:05:37.225 16:48:29 version -- scripts/common.sh@345 -- # : 1 00:05:37.225 16:48:29 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.225 16:48:29 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.225 16:48:29 version -- scripts/common.sh@365 -- # decimal 1 00:05:37.225 16:48:29 version -- scripts/common.sh@353 -- # local d=1 00:05:37.225 16:48:29 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.225 16:48:29 version -- scripts/common.sh@355 -- # echo 1 00:05:37.225 16:48:29 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.225 16:48:29 version -- scripts/common.sh@366 -- # decimal 2 00:05:37.225 16:48:29 version -- scripts/common.sh@353 -- # local d=2 00:05:37.225 16:48:29 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.225 16:48:29 version -- scripts/common.sh@355 -- # echo 2 00:05:37.225 16:48:29 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.225 16:48:29 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.225 16:48:29 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.226 16:48:29 version -- scripts/common.sh@368 -- # return 0 00:05:37.226 16:48:29 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.226 16:48:29 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.226 --rc genhtml_branch_coverage=1 00:05:37.226 --rc genhtml_function_coverage=1 00:05:37.226 --rc genhtml_legend=1 00:05:37.226 --rc geninfo_all_blocks=1 00:05:37.226 --rc geninfo_unexecuted_blocks=1 00:05:37.226 00:05:37.226 ' 00:05:37.226 16:48:29 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.226 --rc genhtml_branch_coverage=1 00:05:37.226 --rc genhtml_function_coverage=1 00:05:37.226 --rc genhtml_legend=1 00:05:37.226 --rc geninfo_all_blocks=1 00:05:37.226 --rc geninfo_unexecuted_blocks=1 00:05:37.226 00:05:37.226 ' 00:05:37.226 16:48:29 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:37.226 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.226 --rc genhtml_branch_coverage=1 00:05:37.226 --rc genhtml_function_coverage=1 00:05:37.226 --rc genhtml_legend=1 00:05:37.226 --rc geninfo_all_blocks=1 00:05:37.226 --rc geninfo_unexecuted_blocks=1 00:05:37.226 00:05:37.226 ' 00:05:37.226 16:48:29 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.226 --rc genhtml_branch_coverage=1 00:05:37.226 --rc genhtml_function_coverage=1 00:05:37.226 --rc genhtml_legend=1 00:05:37.226 --rc geninfo_all_blocks=1 00:05:37.226 --rc geninfo_unexecuted_blocks=1 00:05:37.226 00:05:37.226 ' 00:05:37.226 16:48:29 version -- app/version.sh@17 -- # get_header_version major 00:05:37.226 16:48:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:37.226 16:48:29 version -- app/version.sh@14 -- # cut -f2 00:05:37.226 16:48:29 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.226 16:48:29 version -- app/version.sh@17 -- # major=25 00:05:37.226 16:48:29 version -- app/version.sh@18 -- # get_header_version minor 00:05:37.226 16:48:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:37.226 16:48:29 version -- app/version.sh@14 -- # cut -f2 00:05:37.226 16:48:29 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.226 16:48:29 version -- app/version.sh@18 -- # minor=1 00:05:37.226 16:48:29 version -- app/version.sh@19 -- # get_header_version patch 00:05:37.226 16:48:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:37.226 16:48:29 version -- app/version.sh@14 -- # cut -f2 00:05:37.226 16:48:29 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.226 16:48:29 version -- app/version.sh@19 -- # patch=0 00:05:37.226 16:48:29 version -- app/version.sh@20 -- # get_header_version suffix 00:05:37.226 16:48:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:37.226 16:48:29 version -- app/version.sh@14 -- # cut -f2 00:05:37.226 16:48:29 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.226 16:48:29 version -- app/version.sh@20 -- # suffix=-pre 00:05:37.226 16:48:29 version -- app/version.sh@22 -- # version=25.1 00:05:37.226 16:48:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:37.226 16:48:29 version -- app/version.sh@28 -- # version=25.1rc0 00:05:37.226 16:48:29 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:37.226 16:48:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:37.226 16:48:29 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:37.226 16:48:29 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:37.226 00:05:37.226 real 0m0.286s 00:05:37.226 user 0m0.167s 00:05:37.226 sys 0m0.169s 00:05:37.226 16:48:29 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.226 
16:48:29 version -- common/autotest_common.sh@10 -- # set +x 00:05:37.226 ************************************ 00:05:37.226 END TEST version 00:05:37.226 ************************************ 00:05:37.226 16:48:29 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:37.226 16:48:29 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:37.226 16:48:29 -- spdk/autotest.sh@194 -- # uname -s 00:05:37.226 16:48:29 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:37.226 16:48:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:37.226 16:48:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:37.226 16:48:29 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:37.226 16:48:29 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:37.226 16:48:29 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:37.226 16:48:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:37.226 16:48:29 -- common/autotest_common.sh@10 -- # set +x 00:05:37.486 16:48:29 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:37.486 16:48:29 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:37.486 16:48:29 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:37.486 16:48:29 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:37.486 16:48:29 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:37.486 16:48:29 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:37.486 16:48:29 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:37.486 16:48:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:37.486 16:48:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.486 16:48:29 -- common/autotest_common.sh@10 -- # set +x 00:05:37.486 ************************************ 00:05:37.486 START TEST nvmf_tcp 00:05:37.486 ************************************ 00:05:37.486 16:48:29 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:37.486 * Looking for test storage... 
00:05:37.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:37.486 16:48:29 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.486 16:48:29 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.486 16:48:29 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.749 16:48:29 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.749 16:48:29 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:37.749 16:48:29 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.749 16:48:29 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.749 --rc genhtml_branch_coverage=1 00:05:37.749 --rc genhtml_function_coverage=1 00:05:37.749 --rc genhtml_legend=1 00:05:37.749 --rc geninfo_all_blocks=1 00:05:37.749 --rc geninfo_unexecuted_blocks=1 00:05:37.749 00:05:37.749 ' 00:05:37.749 16:48:29 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.749 --rc genhtml_branch_coverage=1 00:05:37.749 --rc genhtml_function_coverage=1 00:05:37.749 --rc genhtml_legend=1 00:05:37.749 --rc geninfo_all_blocks=1 00:05:37.749 --rc geninfo_unexecuted_blocks=1 00:05:37.749 00:05:37.749 ' 00:05:37.749 16:48:29 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:37.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.749 --rc genhtml_branch_coverage=1 00:05:37.749 --rc genhtml_function_coverage=1 00:05:37.749 --rc genhtml_legend=1 00:05:37.749 --rc geninfo_all_blocks=1 00:05:37.749 --rc geninfo_unexecuted_blocks=1 00:05:37.749 00:05:37.749 ' 00:05:37.749 16:48:29 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.749 --rc genhtml_branch_coverage=1 00:05:37.749 --rc genhtml_function_coverage=1 00:05:37.749 --rc genhtml_legend=1 00:05:37.749 --rc geninfo_all_blocks=1 00:05:37.749 --rc geninfo_unexecuted_blocks=1 00:05:37.749 00:05:37.749 ' 00:05:37.749 16:48:29 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:37.749 16:48:29 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:37.749 16:48:29 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:37.749 16:48:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:37.749 16:48:29 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.749 16:48:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.749 ************************************ 00:05:37.749 START TEST nvmf_target_core 00:05:37.749 ************************************ 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:37.749 * Looking for test storage... 00:05:37.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.749 --rc genhtml_branch_coverage=1 00:05:37.749 --rc genhtml_function_coverage=1 00:05:37.749 --rc genhtml_legend=1 00:05:37.749 --rc geninfo_all_blocks=1 00:05:37.749 --rc geninfo_unexecuted_blocks=1 00:05:37.749 00:05:37.749 ' 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.749 --rc genhtml_branch_coverage=1 00:05:37.749 --rc genhtml_function_coverage=1 00:05:37.749 --rc genhtml_legend=1 00:05:37.749 --rc geninfo_all_blocks=1 00:05:37.749 --rc geninfo_unexecuted_blocks=1 00:05:37.749 00:05:37.749 ' 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:37.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.749 --rc genhtml_branch_coverage=1 00:05:37.749 --rc genhtml_function_coverage=1 00:05:37.749 --rc genhtml_legend=1 00:05:37.749 --rc geninfo_all_blocks=1 00:05:37.749 --rc geninfo_unexecuted_blocks=1 00:05:37.749 00:05:37.749 ' 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.749 --rc genhtml_branch_coverage=1 00:05:37.749 --rc genhtml_function_coverage=1 00:05:37.749 --rc genhtml_legend=1 00:05:37.749 --rc geninfo_all_blocks=1 00:05:37.749 --rc geninfo_unexecuted_blocks=1 00:05:37.749 00:05:37.749 ' 00:05:37.749 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:38.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:38.011 16:48:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:38.012 16:48:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:38.012 16:48:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.012 16:48:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:38.012 
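The cmp_versions trace above (repeated once per test scope) boils down to an element-wise numeric compare: split both version strings on '.', '-', and ':', then walk the components until one side wins. A minimal standalone sketch of that logic, assuming purely numeric components and a hypothetical helper name version_lt (the real script routes each component through its decimal guard first):

version_lt() {
    # Split on the same separators the trace shows (IFS=.-:).
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    # Walk up to the longer version, padding missing components with 0.
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "old lcov: add branch/function coverage flags"

That comparison is why lcov 1.15 takes the --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 path seen in the LCOV_OPTS export in the trace above.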
************************************ 00:05:38.012 START TEST nvmf_abort 00:05:38.012 ************************************ 00:05:38.012 16:48:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:38.012 * Looking for test storage... 00:05:38.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:38.012 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:38.012 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:38.012 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.012 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.012 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.012 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.012 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.274 --rc genhtml_branch_coverage=1 00:05:38.274 --rc genhtml_function_coverage=1 00:05:38.274 --rc genhtml_legend=1 00:05:38.274 --rc geninfo_all_blocks=1 00:05:38.274 --rc geninfo_unexecuted_blocks=1 00:05:38.274 00:05:38.274 ' 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.274 --rc genhtml_branch_coverage=1 00:05:38.274 --rc genhtml_function_coverage=1 00:05:38.274 --rc genhtml_legend=1 00:05:38.274 --rc geninfo_all_blocks=1 00:05:38.274 --rc geninfo_unexecuted_blocks=1 00:05:38.274 00:05:38.274 ' 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.274 --rc genhtml_branch_coverage=1 00:05:38.274 --rc genhtml_function_coverage=1 00:05:38.274 --rc genhtml_legend=1 00:05:38.274 --rc geninfo_all_blocks=1 00:05:38.274 --rc geninfo_unexecuted_blocks=1 00:05:38.274 00:05:38.274 ' 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.274 --rc genhtml_branch_coverage=1 00:05:38.274 --rc genhtml_function_coverage=1 00:05:38.274 --rc genhtml_legend=1 00:05:38.274 --rc geninfo_all_blocks=1 00:05:38.274 --rc geninfo_unexecuted_blocks=1 00:05:38.274 00:05:38.274 ' 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:38.274 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:38.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
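nvmftestinit, whose trace follows, probes the e810 NICs and then builds a two-endpoint NVMe/TCP test network on a single host by moving the target-side port into its own network namespace. Condensed to the shell commands the trace below actually runs (the cvl_0_0/cvl_0_1 device names and 10.0.0.0/24 addresses are specific to this run):

# Flush stale addresses, then isolate the target port in a namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# Initiator keeps 10.0.0.1 on the host; target gets 10.0.0.2 inside the netns.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listener port, then sanity-ping both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The sub-millisecond ping round trips recorded below confirm the link before nvmf_tgt is started inside the namespace.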
00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:38.275 16:48:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:46.410 16:48:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:46.410 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:46.410 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:46.410 16:48:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:46.410 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:46.410 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:46.410 16:48:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:46.410 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:46.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:46.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:05:46.411 00:05:46.411 --- 10.0.0.2 ping statistics --- 00:05:46.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:46.411 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:46.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:46.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:05:46.411 00:05:46.411 --- 10.0.0.1 ping statistics --- 00:05:46.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:46.411 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1742898 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1742898 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1742898 ']' 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.411 16:48:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:46.411 [2024-11-20 16:48:37.827883] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:05:46.411 [2024-11-20 16:48:37.827950] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:46.411 [2024-11-20 16:48:37.930571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.411 [2024-11-20 16:48:37.984684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:46.411 [2024-11-20 16:48:37.984739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:46.411 [2024-11-20 16:48:37.984748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:46.411 [2024-11-20 16:48:37.984756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:46.411 [2024-11-20 16:48:37.984762] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:46.411 [2024-11-20 16:48:37.986820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.411 [2024-11-20 16:48:37.986984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.411 [2024-11-20 16:48:37.986985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:46.672 [2024-11-20 16:48:38.704071] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:46.672 Malloc0 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:46.672 Delay0 
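The rpc_cmd calls traced above and just below (rpc_cmd is the harness wrapper around scripts/rpc.py) assemble the abort-test target in five steps: a TCP transport, a RAM-backed bdev, a delay bdev stacked on top of it, and a subsystem exporting that namespace on 10.0.0.2:4420. As direct scripts/rpc.py invocations, with flag values copied from this run (the delay bdev latencies are in microseconds, so roughly 1 s each, which keeps I/O queued long enough for the abort example to have something to cancel):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0          # 64 MiB, 4 KiB blocks
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000               # avg/p99 read and write latency
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

With Delay0 in the stack, the abort example's queue depth of 128 overflows immediately, which is exactly the condition the roughly 28k submitted aborts reported below exercise.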
00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:46.672 [2024-11-20 16:48:38.787466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.672 16:48:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:46.932 [2024-11-20 16:48:38.897859] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:48.841 Initializing NVMe Controllers 00:05:48.841 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:48.841 controller IO queue size 128 less than required 00:05:48.841 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:48.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:48.841 Initialization complete. Launching workers. 
00:05:48.841 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28304 00:05:48.841 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28365, failed to submit 62 00:05:48.841 success 28308, unsuccessful 57, failed 0 00:05:48.841 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:48.841 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.841 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.841 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.841 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:48.841 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:48.841 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:48.841 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:48.841 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:48.841 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:48.841 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:48.841 16:48:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:48.841 rmmod nvme_tcp 00:05:48.842 rmmod nvme_fabrics 00:05:48.842 rmmod nvme_keyring 00:05:48.842 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:48.842 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:48.842 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:48.842 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1742898 ']' 00:05:48.842 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1742898 00:05:48.842 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1742898 ']' 00:05:48.842 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1742898 00:05:48.842 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:49.102 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.102 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1742898 00:05:49.102 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:49.102 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:49.102 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1742898' 00:05:49.102 killing process with pid 1742898 00:05:49.102 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1742898 00:05:49.102 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1742898 00:05:49.102 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:49.102 16:48:41 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:49.102 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:49.102 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:49.102 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:49.102 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:49.102 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:49.102 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:49.102 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:49.102 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:49.102 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:49.102 16:48:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:51.646 00:05:51.646 real 0m13.280s 00:05:51.646 user 0m13.641s 00:05:51.646 sys 0m6.519s 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.646 ************************************ 00:05:51.646 END TEST nvmf_abort 00:05:51.646 ************************************ 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:51.646 ************************************ 00:05:51.646 START TEST nvmf_ns_hotplug_stress 00:05:51.646 ************************************ 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:51.646 * Looking for test storage... 
00:05:51.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.646 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.647 --rc genhtml_branch_coverage=1 00:05:51.647 --rc genhtml_function_coverage=1 00:05:51.647 --rc genhtml_legend=1 00:05:51.647 --rc geninfo_all_blocks=1 00:05:51.647 --rc geninfo_unexecuted_blocks=1 00:05:51.647 00:05:51.647 ' 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.647 --rc genhtml_branch_coverage=1 00:05:51.647 --rc genhtml_function_coverage=1 00:05:51.647 --rc genhtml_legend=1 00:05:51.647 --rc geninfo_all_blocks=1 00:05:51.647 --rc geninfo_unexecuted_blocks=1 00:05:51.647 00:05:51.647 ' 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.647 --rc genhtml_branch_coverage=1 00:05:51.647 --rc genhtml_function_coverage=1 00:05:51.647 --rc genhtml_legend=1 00:05:51.647 --rc geninfo_all_blocks=1 00:05:51.647 --rc geninfo_unexecuted_blocks=1 00:05:51.647 00:05:51.647 ' 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.647 --rc genhtml_branch_coverage=1 00:05:51.647 --rc genhtml_function_coverage=1 00:05:51.647 --rc genhtml_legend=1 00:05:51.647 --rc geninfo_all_blocks=1 00:05:51.647 --rc geninfo_unexecuted_blocks=1 00:05:51.647 00:05:51.647 ' 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:51.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:51.647 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:51.648 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:51.648 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:51.648 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:51.648 16:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:59.783 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:59.784 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:59.784 
16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:59.784 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:59.784 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:59.784 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:59.784 16:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:59.784 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:59.784 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:59.784 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:59.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:59.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:05:59.784 00:05:59.784 --- 10.0.0.2 ping statistics --- 00:05:59.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:59.784 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:05:59.784 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:59.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:59.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:05:59.784 00:05:59.784 --- 10.0.0.1 ping statistics --- 00:05:59.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:59.784 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1747825 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1747825 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
1747825 ']' 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.785 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:59.785 [2024-11-20 16:48:51.141227] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:05:59.785 [2024-11-20 16:48:51.141296] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:59.785 [2024-11-20 16:48:51.243803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.785 [2024-11-20 16:48:51.294971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:59.785 [2024-11-20 16:48:51.295026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:59.785 [2024-11-20 16:48:51.295034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:59.785 [2024-11-20 16:48:51.295042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:59.785 [2024-11-20 16:48:51.295049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
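The namespace plumbing traced above reduces to a single-host, back-to-back NVMe/TCP topology: one port of the E810 pair (cvl_0_0 in this run) is moved into the cvl_0_0_ns_spdk namespace to serve as the target at 10.0.0.2, while its peer (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1. A minimal sketch of those steps, using the interface names from this run (they vary per machine):

    #!/usr/bin/env bash
    # Single-host NVMe/TCP test topology, condensed from the nvmf_tcp_init trace above.
    NS=cvl_0_0_ns_spdk   # target namespace used by this run
    TGT=cvl_0_0          # target-side port of the back-to-back pair
    INI=cvl_0_1          # initiator-side port

    ip -4 addr flush "$TGT"; ip -4 addr flush "$INI"
    ip netns add "$NS"
    ip link set "$TGT" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
    ip link set "$INI" up
    ip netns exec "$NS" ip link set "$TGT" up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP listener port toward the initiator-facing interface
    iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator

The target itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE); the -m 0xE core mask is why exactly three reactors come up on cores 1-3 in the notices that follow.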
00:05:59.785 [2024-11-20 16:48:51.296893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.785 [2024-11-20 16:48:51.297054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.785 [2024-11-20 16:48:51.297055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.046 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.046 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:00.046 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:00.046 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:00.046 16:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:00.047 16:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:00.047 16:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:00.047 16:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:00.047 [2024-11-20 16:48:52.182072] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:00.047 16:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:00.308 16:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:00.568 [2024-11-20 16:48:52.589243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:00.568 16:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:00.829 16:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:00.829 Malloc0 00:06:01.090 16:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:01.090 Delay0 00:06:01.090 16:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.350 16:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:01.611 NULL1 00:06:01.611 16:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:01.870 16:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1748489 00:06:01.870 16:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:01.870 16:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:01.871 16:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.871 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.130 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:02.130 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:02.390 true 00:06:02.390 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:02.390 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.649 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.649 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:02.649 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:02.909 true 00:06:02.909 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:02.909 16:48:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.169 16:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.169 16:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:03.169 16:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:03.429 true 00:06:03.429 16:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:03.429 16:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.689 16:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.689 16:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:03.689 16:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:03.948 true 00:06:03.948 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:03.948 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.208 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.208 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:04.208 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:04.467 true 00:06:04.468 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:04.468 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.727 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.987 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:04.988 16:48:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:04.988 true 00:06:04.988 16:48:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:04.988 16:48:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.247 16:48:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.507 16:48:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:05.507 16:48:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:05.507 true 00:06:05.507 16:48:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:05.507 16:48:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.766 16:48:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.026 16:48:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:06.026 16:48:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:06.026 true 00:06:06.026 16:48:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:06.026 16:48:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.286 16:48:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.547 16:48:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:06.547 16:48:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:06.547 true 00:06:06.807 16:48:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:06.807 16:48:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.807 16:48:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.067 16:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:07.067 16:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:07.325 true 00:06:07.325 16:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:07.325 16:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.325 16:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.584 16:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:07.584 16:48:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:07.846 true 00:06:07.846 16:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:07.846 16:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.846 16:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.106 16:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:08.106 16:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:08.366 true 00:06:08.366 16:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:08.366 16:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.625 16:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.625 16:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:08.625 16:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:08.885 true 00:06:08.885 16:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:08.886 16:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.147 16:49:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.147 16:49:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:09.147 16:49:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:09.407 true 00:06:09.407 16:49:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:09.407 16:49:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.667 16:49:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.928 16:49:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:09.928 16:49:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:09.928 true 00:06:09.928 16:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:09.928 16:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.187 16:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.448 16:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:10.448 16:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:10.448 true 00:06:10.448 16:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:10.448 16:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.710 16:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.970 16:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:10.970 16:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:10.970 true 00:06:11.230 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:11.230 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.230 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.491 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:11.491 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:11.752 true 00:06:11.752 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:11.752 16:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.752 16:49:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.011 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:12.011 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:12.272 true 00:06:12.272 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:12.272 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.272 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.533 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:12.533 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:12.794 true 00:06:12.794 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:12.794 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.054 16:49:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.054 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:13.054 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:13.315 true 00:06:13.315 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:13.315 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.575 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.575 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:13.575 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:13.835 true 00:06:13.835 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:13.835 16:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.096 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.096 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:14.096 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:14.355 true 00:06:14.355 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:14.355 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.615 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.615 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:14.615 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:14.875 true 00:06:14.875 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:14.875 16:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.134 16:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.395 16:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:15.395 16:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:15.395 true 00:06:15.395 16:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:15.395 16:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.655 16:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.915 16:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:15.915 16:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:15.915 true 00:06:15.915 16:49:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:15.915 16:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.174 16:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.434 16:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:16.434 16:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:16.434 true 00:06:16.434 16:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:16.434 16:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.694 16:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.954 16:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:16.954 16:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:17.226 true 00:06:17.226 16:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:17.226 16:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.226 16:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.487 16:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:17.487 16:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:17.797 true 00:06:17.797 16:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:17.797 16:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.797 16:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.133 16:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:18.133 16:49:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:18.133 true 00:06:18.133 16:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:18.133 16:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.393 16:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.654 16:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:18.654 16:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:18.654 true 00:06:18.654 16:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:18.654 16:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.914 16:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.174 16:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:19.174 16:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:19.174 true 00:06:19.434 16:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:19.434 16:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.434 16:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.694 16:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:19.694 16:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:19.694 true 00:06:19.955 16:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:19.955 16:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.955 16:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.215 16:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:20.215 16:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:20.475 true 00:06:20.475 16:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:20.475 16:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.476 16:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.735 16:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:20.736 16:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:20.995 true 00:06:20.995 16:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:20.995 16:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.255 16:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.255 16:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:21.255 16:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:21.516 true 00:06:21.516 16:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:21.516 16:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.776 16:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.776 16:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:21.776 16:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:22.035 true 00:06:22.035 16:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:22.035 16:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.295 16:49:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.555 16:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:22.555 16:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:22.555 true 00:06:22.555 16:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:22.555 16:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.814 16:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.074 16:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:23.074 16:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:23.074 true 00:06:23.074 16:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:23.074 16:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.335 16:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.595 16:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:23.595 16:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:23.595 true 00:06:23.595 16:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:23.595 16:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.855 16:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.114 16:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:24.114 16:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:24.114 true 00:06:24.374 16:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:24.375 16:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.375 16:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.635 16:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:24.635 16:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:24.895 true 00:06:24.895 16:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:24.895 16:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.895 16:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.156 16:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:25.156 16:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:25.415 true 00:06:25.415 16:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:25.415 16:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.415 16:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.675 16:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:25.675 16:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:25.935 true 00:06:25.935 16:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:25.935 16:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.194 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.194 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:26.194 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:26.454 true 00:06:26.454 16:49:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:26.454 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.714 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.714 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:26.714 16:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:26.974 true 00:06:26.974 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:26.974 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.250 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.250 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:27.250 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:27.517 true 00:06:27.517 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:27.517 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.778 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.778 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:27.778 16:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:28.038 true 00:06:28.038 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:28.038 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.299 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.560 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:06:28.560 16:49:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:06:28.560 true 00:06:28.560 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:28.560 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.821 16:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.081 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:06:29.081 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:06:29.081 true 00:06:29.081 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:29.081 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.340 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.601 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:06:29.601 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:06:29.601 true 00:06:29.601 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:29.601 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.862 16:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.122 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:06:30.122 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:06:30.122 true 00:06:30.382 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:30.382 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.382 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.642 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:06:30.642 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:06:30.903 true 00:06:30.904 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:30.904 16:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.904 16:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.164 16:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:06:31.164 16:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:06:31.424 true 00:06:31.424 16:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:31.424 16:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.424 16:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.685 16:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:06:31.685 16:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:06:31.946 true 00:06:31.946 16:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:31.946 16:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:31.946 Initializing NVMe Controllers
00:06:31.946 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:31.946 Controller IO queue size 128, less than required.
00:06:31.946 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:31.946 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:31.946 Initialization complete. Launching workers.
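
The sh@44-sh@50 entries above are the single-namespace phase of ns_hotplug_stress.sh: while the I/O generator (PID 1748489) stays alive, each pass removes namespace 1, re-adds the Delay0 bdev, and bumps NULL1's size parameter by one (the null_size counter runs 1023 through 1056 in this run); the loop ends when kill -0 at script line 44 reports the generator gone, visible below as "No such process". A minimal bash sketch reconstructed from those script-line markers; it is not the verbatim script, and the perf_pid and null_size variable names are assumptions:

    # Hedged reconstruction of ns_hotplug_stress.sh lines 44-50 (sketch only).
    # rpc.py stands for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py.
    while kill -0 "$perf_pid"; do                                      # sh@44: loop while I/O runs
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # sh@45
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # sh@46
        null_size=$((null_size + 1))                                   # sh@49: 1023, 1024, ..., 1056
        rpc.py bdev_null_resize NULL1 "$null_size"                     # sh@50: resize under active I/O
    done

The interleaved perf output just below summarizes the I/O the initiator sustained while the namespace was hot-added and hot-removed underneath it.
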
00:06:31.946 ========================================================
00:06:31.946 Latency(us)
00:06:31.946 Device Information : IOPS MiB/s Average min max
00:06:31.946 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30914.56 15.09 4140.36 1111.33 8049.38
00:06:31.946 ========================================================
00:06:31.946 Total : 30914.56 15.09 4140.36 1111.33 8049.38
00:06:32.207 16:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.207 16:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:06:32.207 16:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:06:32.468 true 00:06:32.468 16:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1748489 00:06:32.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1748489) - No such process 00:06:32.468 16:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1748489 00:06:32.468 16:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.728 16:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:32.728 16:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:32.728 16:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:32.728 16:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:32.728 16:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:32.728 16:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:32.989 null0 00:06:32.989 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:32.989 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:32.989 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:33.250 null1 00:06:33.250 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:33.250 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:33.250 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:33.250 null2 00:06:33.250 16:49:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:33.250 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:33.250 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:33.510 null3 00:06:33.510 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:33.510 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:33.510 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:33.771 null4 00:06:33.771 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:33.771 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:33.771 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:34.031 null5 00:06:34.031 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:34.031 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:34.031 16:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:34.031 null6 00:06:34.031 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:34.031 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:34.031 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:34.292 null7 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
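
At this point the sh@58-sh@60 entries above have prepared the fan-out phase: nthreads=8, an empty pids array, and eight null bdevs (null0 through null7) created with the logged arguments 100 and 4096 (size and block size in bytes, per the usual rpc.py argument order). A sketch of that setup loop, under the same rpc.py shorthand and hedging as above:

    # Hedged reconstruction of ns_hotplug_stress.sh lines 58-60 (sketch only).
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096   # sh@60: one backing bdev per worker
    done
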
00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
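
Each worker runs the add_remove helper traced by the sh@14-sh@18 markers: ten cycles that add a fixed namespace ID backed by the worker's own null bdev and immediately remove it again, so all eight namespace slots churn concurrently. A hedged sketch; the positional-argument order is inferred from the "# add_remove 1 null0" entries:

    # Hedged reconstruction of ns_hotplug_stress.sh lines 14-18 (sketch only).
    add_remove() {
        local nsid=$1 bdev=$2                                                       # sh@14
        for ((i = 0; i < 10; i++)); do                                              # sh@16
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }
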
00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:34.292 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
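
The sh@62-sh@64 entries dispatch one add_remove worker per namespace/bdev pair and collect the child PIDs; the sh@66 entry just below then waits on all eight (PIDs 1755077 through 1755090 in this run). A sketch of that dispatch, assuming the helper above and that each worker is backgrounded (implied by the pids+=($!) entries):

    # Hedged reconstruction of ns_hotplug_stress.sh lines 62-66 (sketch only).
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # sh@63: nsid 1..8 against null0..null7
        pids+=($!)                         # sh@64
    done
    wait "${pids[@]}"                      # sh@66
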
00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1755077 1755078 1755080 1755082 1755084 1755086 1755088 1755090 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.293 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:34.554 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:34.554 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.554 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:34.554 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:34.554 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:34.555 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:34.555 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:34.555 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:34.555 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.555 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.555 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:34.555 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.555 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.555 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:34.555 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.555 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.555 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.816 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:35.076 16:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:35.076 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:35.076 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:35.076 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.076 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.076 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:35.076 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.076 16:49:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.076 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:35.076 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.076 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.076 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:35.076 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.076 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.076 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:35.076 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.076 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.076 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:35.077 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.077 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.077 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:35.077 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.077 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.077 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:35.077 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.077 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.077 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.336 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:35.596 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.596 16:49:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.596 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:35.596 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.596 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.596 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:35.596 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.596 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.596 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:35.596 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.596 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.596 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:35.596 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:35.596 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.596 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:35.596 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:35.596 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:35.596 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:35.596 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:35.596 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:35.857 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.857 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.857 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:35.857 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.857 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.858 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:35.858 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.858 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.858 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:35.858 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.858 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.858 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:35.858 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.858 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.858 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:35.858 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.858 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.858 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:35.858 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.858 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.858 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:35.858 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.858 16:49:27 
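
Each @17/@18 record in this trace is one SPDK JSON-RPC call issued through scripts/rpc.py: nvmf_subsystem_add_ns attaches a bdev to the subsystem under an explicit namespace ID, and nvmf_subsystem_remove_ns detaches it by that same ID. Typed by hand, one add/remove pair from this trace looks like the following (run from the spdk checkout; the nsid-to-bdev pairing of N to null(N-1) is taken directly from the records above):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# attach bdev null6 to cnode1 as namespace 7 ...
scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
# ... and detach namespace 7 again
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
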
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.858 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:35.858 16:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:35.858 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.119 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:36.119 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:36.119 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.120 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:36.381 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.381 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.381 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:36.381 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.381 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.381 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:36.381 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.381 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:36.381 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:36.381 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:36.381 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:36.381 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:36.381 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:36.381 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:36.381 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.381 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.381 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.642 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.904 16:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:36.904 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.904 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.904 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:36.904 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:37.166 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.166 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.166 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:37.166 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:06:37.166 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.166 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:37.166 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:37.166 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:37.166 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.166 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.166 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:37.166 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:37.167 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:37.167 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.167 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.167 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.428 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:37.690 16:49:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:37.690 16:49:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.690 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:37.950 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:37.950 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.950 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.950 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:37.950 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:37.950 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:37.950 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:37.951 16:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:37.951 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.951 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.951 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.951 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.951 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.213 16:49:30 
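
The churn above is the whole body of the stress loop: the @16 tags are the loop counter, @17 the namespace adds, @18 the removes. A minimal sketch of that loop, reconstructed from the line tags alone (a hedged reading, not the literal target/ns_hotplug_stress.sh: the trace shows the script also randomizes the order and, in later rounds, the subset of namespaces it touches):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
i=0
while (( i < 10 )); do                  # @16: ten rounds of hotplug churn
    for n in $(shuf -e {1..8}); do      # @17: attach null0..null7 in a random order
        "$RPC" nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
    done
    for n in $(shuf -e {1..8}); do      # @18: detach the eight namespaces, reshuffled
        "$RPC" nvmf_subsystem_remove_ns "$NQN" "$n"
    done
    (( ++i ))
done
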
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:38.213 rmmod nvme_tcp 00:06:38.213 rmmod nvme_fabrics 00:06:38.213 rmmod nvme_keyring 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:38.213 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:38.214 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1747825 ']' 00:06:38.214 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1747825 00:06:38.214 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1747825 ']' 00:06:38.214 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1747825 00:06:38.214 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:38.214 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.214 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1747825 00:06:38.214 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:38.214 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:38.214 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1747825' 00:06:38.214 killing process with pid 1747825 00:06:38.214 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1747825 00:06:38.214 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1747825 00:06:38.474 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:38.474 16:49:30 
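
At this point the loop has drained and teardown begins: the trap is cleared, nvmftestfini calls nvmfcleanup, and modprobe -v -r nvme-tcp unloads the kernel initiator stack (the bare "rmmod nvme_tcp / nvme_fabrics / nvme_keyring" lines are modprobe's own verbose output, not separate commands). killprocess then verifies pid 1747825 still names an SPDK reactor before killing it. A hedged sketch of that sequence; the function names follow the common.sh tags in the trace, but the bodies are simplified:

nvmfcleanup() {
    sync
    set +e                          # @124: module removal may need several tries
    for i in {1..20}; do            # @125
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1                     # retry while a controller still pins the modules (assumption)
    done
    set -e                          # @128
}

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 0                                   # already gone
    [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1 # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}
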
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:38.474 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:38.474 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:38.474 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:38.474 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:38.474 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:38.474 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:38.474 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:38.474 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.474 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.474 16:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.381 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:40.381 00:06:40.381 real 0m49.188s 00:06:40.381 user 3m20.389s 00:06:40.381 sys 0m17.588s 00:06:40.381 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.381 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:40.381 ************************************ 00:06:40.381 END TEST nvmf_ns_hotplug_stress 00:06:40.381 ************************************ 00:06:40.642 16:49:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:40.642 16:49:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:40.642 16:49:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.642 16:49:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:40.642 ************************************ 00:06:40.642 START TEST nvmf_delete_subsystem 00:06:40.642 ************************************ 00:06:40.642 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:40.642 * Looking for test storage... 
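
The real/user/sys triple above is the bash time builtin reporting on the whole test (about 49 s of wall time, 3 m 20 s of CPU across the reactors), after which the run_test wrapper prints the END banner and immediately launches the next suite entry, delete_subsystem.sh --transport=tcp. Reconstructed from the banners and the "'[' 3 -le 1 ']'" argument check, run_test plausibly looks like this sketch (the real common/autotest_common.sh is more elaborate):

run_test() {
    local name=$1; shift
    (( $# >= 1 )) || return 1      # arg-count guard, per the '[ 3 -le 1 ]' record (assumption)
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                      # produces the real/user/sys lines seen above
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
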
00:06:40.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:40.642 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.642 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.642 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.903 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.903 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.904 --rc genhtml_branch_coverage=1 00:06:40.904 --rc genhtml_function_coverage=1 00:06:40.904 --rc genhtml_legend=1 00:06:40.904 --rc geninfo_all_blocks=1 00:06:40.904 --rc geninfo_unexecuted_blocks=1 00:06:40.904 00:06:40.904 ' 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:40.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.904 --rc genhtml_branch_coverage=1 00:06:40.904 --rc genhtml_function_coverage=1 00:06:40.904 --rc genhtml_legend=1 00:06:40.904 --rc geninfo_all_blocks=1 00:06:40.904 --rc geninfo_unexecuted_blocks=1 00:06:40.904 00:06:40.904 ' 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.904 --rc genhtml_branch_coverage=1 00:06:40.904 --rc genhtml_function_coverage=1 00:06:40.904 --rc genhtml_legend=1 00:06:40.904 --rc geninfo_all_blocks=1 00:06:40.904 --rc geninfo_unexecuted_blocks=1 00:06:40.904 00:06:40.904 ' 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.904 --rc genhtml_branch_coverage=1 00:06:40.904 --rc genhtml_function_coverage=1 00:06:40.904 --rc genhtml_legend=1 00:06:40.904 --rc geninfo_all_blocks=1 00:06:40.904 --rc geninfo_unexecuted_blocks=1 00:06:40.904 00:06:40.904 ' 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:40.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:40.904 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:40.905 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.905 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:40.905 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:40.905 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:40.905 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.905 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.905 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.905 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:40.905 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:40.905 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:40.905 16:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:49.043 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:49.043 
16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:49.043 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:49.043 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:49.043 Found net devices under 0000:4b:00.1: cvl_0_1 
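A minimal standalone sketch of the PCI-to-netdev lookup traced above, built only from the expansions visible in the trace (a sketch, not part of the captured run):

#!/usr/bin/env bash
# Map a PCI address to its kernel network interface names --
# the same sysfs walk common.sh performs for each entry in pci_devs.
pci=0000:4b:00.0
# Every directory under /sys/bus/pci/devices/<addr>/net/ is one interface.
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
# Keep only the basenames, e.g. cvl_0_0.
pci_net_devs=("${pci_net_devs[@]##*/}")
printf 'Found net devices under %s: %s\n' "$pci" "${pci_net_devs[*]}"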
00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:49.043 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:49.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:49.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:06:49.044 00:06:49.044 --- 10.0.0.2 ping statistics --- 00:06:49.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.044 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:49.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:49.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:06:49.044 00:06:49.044 --- 10.0.0.1 ping statistics --- 00:06:49.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.044 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1760257 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1760257 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1760257 ']' 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.044 16:49:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.044 16:49:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:49.044 [2024-11-20 16:49:40.450540] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:06:49.044 [2024-11-20 16:49:40.450611] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.044 [2024-11-20 16:49:40.551983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.044 [2024-11-20 16:49:40.602831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:49.044 [2024-11-20 16:49:40.602883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:49.044 [2024-11-20 16:49:40.602898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:49.044 [2024-11-20 16:49:40.602906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:49.044 [2024-11-20 16:49:40.602912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:49.044 [2024-11-20 16:49:40.604609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.044 [2024-11-20 16:49:40.604614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:49.305 [2024-11-20 16:49:41.305021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:49.305 16:49:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:49.305 [2024-11-20 16:49:41.329318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:49.305 NULL1 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:49.305 Delay0 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1760503 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:49.305 16:49:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:49.305 [2024-11-20 16:49:41.466319] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
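The rpc_cmd calls traced above assemble the test fixture; condensed into one sequence (a sketch, assuming scripts/rpc.py carries the same calls as the test's rpc_cmd wrapper and that the delay bdev arguments are microseconds — neither is stated in the log):

# Fixture for the delete-subsystem race, reconstructed from the trace above.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# With ~1 s of injected latency on Delay0 (assumed microsecond units), perf I/O
# stays queued, so the later nvmf_delete_subsystem executes against in-flight
# commands -- which is what produces the "Read/Write completed with error" storm
# recorded below.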
00:06:51.219 16:49:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:51.219 16:49:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.219 16:49:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 starting I/O failed: -6 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Write completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 starting I/O failed: -6 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 starting I/O failed: -6 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 starting I/O failed: -6 00:06:51.481 Write completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Write completed with error (sct=0, sc=8) 00:06:51.481 Write completed with error (sct=0, sc=8) 00:06:51.481 starting I/O failed: -6 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Write completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Write completed with error (sct=0, sc=8) 00:06:51.481 starting I/O failed: -6 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Write completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Write completed with error (sct=0, sc=8) 00:06:51.481 starting I/O failed: -6 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Write completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 starting I/O failed: -6 00:06:51.481 Write completed with error (sct=0, sc=8) 00:06:51.481 Write completed with error (sct=0, sc=8) 00:06:51.481 Write completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 starting I/O failed: -6 00:06:51.481 Write completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Write completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 starting I/O failed: -6 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Write completed with error (sct=0, sc=8) 00:06:51.481 Write completed with error (sct=0, sc=8) 00:06:51.481 starting I/O failed: -6 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 [2024-11-20 16:49:43.556428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc3860 is same with the state(6) to be set 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 
Write completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Write completed with error (sct=0, sc=8) 00:06:51.481 Read completed with error (sct=0, sc=8) 00:06:51.481 Write completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, 
sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read 
completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 Write completed with error (sct=0, sc=8) 00:06:51.482 Read completed with error (sct=0, sc=8) 00:06:51.482 starting I/O failed: -6 00:06:51.482 starting I/O failed: -6 00:06:51.482 starting I/O failed: -6 00:06:51.482 starting I/O failed: -6 00:06:51.482 starting I/O failed: -6 00:06:51.482 starting I/O failed: -6 00:06:52.424 [2024-11-20 16:49:44.523319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc49a0 is same with the state(6) to be set 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error 
(sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 [2024-11-20 16:49:44.557164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc32c0 is same with the state(6) to be set 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 [2024-11-20 16:49:44.557267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc3680 is same with the state(6) to be set 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 
00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 [2024-11-20 16:49:44.558694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f172400d020 is same with the state(6) to be set 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read 
completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Write completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 Read completed with error (sct=0, sc=8) 00:06:52.424 [2024-11-20 16:49:44.559098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f172400d7c0 is same with the state(6) to be set 00:06:52.424 Initializing NVMe Controllers 00:06:52.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:52.424 Controller IO queue size 128, less than required. 00:06:52.424 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:52.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:52.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:52.425 Initialization complete. Launching workers. 00:06:52.425 ======================================================== 00:06:52.425 Latency(us) 00:06:52.425 Device Information : IOPS MiB/s Average min max 00:06:52.425 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.79 0.09 885432.36 500.86 1011297.82 00:06:52.425 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 187.70 0.09 901368.14 535.62 1013089.77 00:06:52.425 ======================================================== 00:06:52.425 Total : 362.49 0.18 893684.04 500.86 1013089.77 00:06:52.425 00:06:52.425 [2024-11-20 16:49:44.559617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc49a0 (9): Bad file descriptor 00:06:52.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:52.425 16:49:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.425 16:49:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:52.425 16:49:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1760503 00:06:52.425 16:49:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:52.995 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:52.995 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1760503 00:06:52.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1760503) - No such process 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1760503 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1760503 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1760503 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.996 [2024-11-20 16:49:45.089419] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1761294 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1761294 00:06:52.996 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:53.256 [2024-11-20 16:49:45.187544] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem 
on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:53.516 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:53.516 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1761294 00:06:53.516 16:49:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:54.085 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:54.085 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1761294 00:06:54.085 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:54.654 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:54.654 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1761294 00:06:54.654 16:49:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:55.223 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:55.223 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1761294 00:06:55.223 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:55.484 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:55.484 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1761294 00:06:55.484 16:49:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:56.055 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:56.055 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1761294 00:06:56.055 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:56.316 Initializing NVMe Controllers 00:06:56.316 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:56.316 Controller IO queue size 128, less than required. 00:06:56.316 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:56.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:56.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:56.316 Initialization complete. Launching workers. 
00:06:56.316 ======================================================== 00:06:56.316 Latency(us) 00:06:56.316 Device Information : IOPS MiB/s Average min max 00:06:56.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001638.28 1000142.11 1004378.51 00:06:56.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004072.45 1000282.53 1041797.19 00:06:56.316 ======================================================== 00:06:56.316 Total : 256.00 0.12 1002855.36 1000142.11 1041797.19 00:06:56.316 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1761294 00:06:56.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1761294) - No such process 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1761294 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:56.576 rmmod nvme_tcp 00:06:56.576 rmmod nvme_fabrics 00:06:56.576 rmmod nvme_keyring 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1760257 ']' 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1760257 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1760257 ']' 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1760257 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.576 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1760257 00:06:56.836 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.836 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:56.836 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1760257' 00:06:56.836 killing process with pid 1760257 00:06:56.836 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1760257 00:06:56.836 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1760257 00:06:56.836 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:56.836 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:56.836 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:56.836 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:56.836 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:56.836 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:56.836 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:56.836 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:56.836 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:56.836 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.836 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:56.836 16:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.381 16:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:59.381 00:06:59.381 real 0m18.324s 00:06:59.381 user 0m30.842s 00:06:59.381 sys 0m6.818s 00:06:59.381 16:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.381 16:49:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.381 ************************************ 00:06:59.381 END TEST nvmf_delete_subsystem 00:06:59.381 ************************************ 00:06:59.381 16:49:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:59.381 16:49:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:59.381 ************************************ 00:06:59.381 START TEST nvmf_host_management 00:06:59.381 ************************************ 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:59.381 * Looking for test storage... 
00:06:59.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:59.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.381 --rc genhtml_branch_coverage=1 00:06:59.381 --rc genhtml_function_coverage=1 00:06:59.381 --rc genhtml_legend=1 00:06:59.381 --rc geninfo_all_blocks=1 00:06:59.381 --rc geninfo_unexecuted_blocks=1 00:06:59.381 00:06:59.381 ' 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:59.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.381 --rc genhtml_branch_coverage=1 00:06:59.381 --rc genhtml_function_coverage=1 00:06:59.381 --rc genhtml_legend=1 00:06:59.381 --rc geninfo_all_blocks=1 00:06:59.381 --rc geninfo_unexecuted_blocks=1 00:06:59.381 00:06:59.381 ' 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:59.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.381 --rc genhtml_branch_coverage=1 00:06:59.381 --rc genhtml_function_coverage=1 00:06:59.381 --rc genhtml_legend=1 00:06:59.381 --rc geninfo_all_blocks=1 00:06:59.381 --rc geninfo_unexecuted_blocks=1 00:06:59.381 00:06:59.381 ' 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:59.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.381 --rc genhtml_branch_coverage=1 00:06:59.381 --rc genhtml_function_coverage=1 00:06:59.381 --rc genhtml_legend=1 00:06:59.381 --rc geninfo_all_blocks=1 00:06:59.381 --rc geninfo_unexecuted_blocks=1 00:06:59.381 00:06:59.381 ' 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.381 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:59.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:59.382 16:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:07.534 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:07.534 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:07.534 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.534 16:49:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:07.534 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:07.534 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:07.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:07.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms
00:07:07.534
00:07:07.534 --- 10.0.0.2 ping statistics ---
00:07:07.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:07.535 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms
00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:07.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:07.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms
00:07:07.535
00:07:07.535 --- 10.0.0.1 ping statistics ---
00:07:07.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:07.535 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms
00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1766279 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1766279 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1766279 ']' 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
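For reference, the namespace plumbing and connectivity check traced through nvmf/common.sh above condense to the sequence below. Interface names, addresses, ports, and the iptables comment are taken verbatim from the log; running it requires root and the two ice-driven ports already renamed to cvl_0_0/cvl_0_1:

# The target port moves into its own network namespace; the initiator
# port stays in the root namespace, so NVMe/TCP traffic crosses the wire.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit the NVMe/TCP listener port; the SPDK_NVMF comment is what lets
# the cleanup path strip the rule via iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# One ping in each direction proves the two stacks can reach each other.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1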
00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.535 16:49:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:07.535 [2024-11-20 16:49:58.889369] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:07:07.535 [2024-11-20 16:49:58.889438] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.535 [2024-11-20 16:49:58.991234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.535 [2024-11-20 16:49:59.044342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.535 [2024-11-20 16:49:59.044395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:07.535 [2024-11-20 16:49:59.044407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.535 [2024-11-20 16:49:59.044414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:07.535 [2024-11-20 16:49:59.044420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
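The nvmfappstart sequence above boils down to launching the target inside the namespace and waiting on its RPC socket. -m 0x1E is a core mask (binary 11110), which is why the notices that follow show reactors coming up on cores 1-4; -e 0xFFFF enables all tracepoint groups and -i 0 selects shared-memory instance 0. A condensed sketch with the build path shortened:

# Start nvmf_tgt in the target namespace; waitforlisten then polls
# /var/tmp/spdk.sock until the app answers RPCs.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Per the startup notice, a runtime snapshot of the enabled tracepoints
# is available with 'spdk_trace -s nvmf -i 0', or by copying
# /dev/shm/nvmf_trace.0 for offline analysis.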
00:07:07.535 [2024-11-20 16:49:59.046575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.535 [2024-11-20 16:49:59.046737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.535 [2024-11-20 16:49:59.046899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:07.535 [2024-11-20 16:49:59.046900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.887 [2024-11-20 16:49:59.755567] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.887 Malloc0 00:07:07.887 [2024-11-20 16:49:59.835437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1766367 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1766367 /var/tmp/bdevperf.sock 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1766367 ']' 00:07:07.887 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:07.888 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.888 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:07.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:07.888 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.888 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.888 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:07.888 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:07.888 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:07.888 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:07.888 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:07.888 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:07.888 { 00:07:07.888 "params": { 00:07:07.888 "name": "Nvme$subsystem", 00:07:07.888 "trtype": "$TEST_TRANSPORT", 00:07:07.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:07.888 "adrfam": "ipv4", 00:07:07.888 "trsvcid": "$NVMF_PORT", 00:07:07.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:07.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:07.888 "hdgst": ${hdgst:-false}, 00:07:07.888 "ddgst": ${ddgst:-false} 00:07:07.888 }, 00:07:07.888 "method": "bdev_nvme_attach_controller" 00:07:07.888 } 00:07:07.888 EOF 00:07:07.888 )") 00:07:07.888 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:07.888 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:07.888 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:07.888 16:49:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:07.888 "params": { 00:07:07.888 "name": "Nvme0", 00:07:07.888 "trtype": "tcp", 00:07:07.888 "traddr": "10.0.0.2", 00:07:07.888 "adrfam": "ipv4", 00:07:07.888 "trsvcid": "4420", 00:07:07.888 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:07.888 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:07.888 "hdgst": false, 00:07:07.888 "ddgst": false 00:07:07.888 }, 00:07:07.888 "method": "bdev_nvme_attach_controller" 00:07:07.888 }' 00:07:07.888 [2024-11-20 16:49:59.946775] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
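The --json /dev/fd/63 seen in the bdevperf invocation above is bash process substitution: gen_nvmf_target_json writes the rendered controller config to a pipe that the kernel exposes as file descriptor 63. A reduced sketch of the pattern, collapsing the heredoc to the single rendered bdev_nvme_attach_controller entry from the trace and wrapping it in the standard SPDK "subsystems" JSON shape (the real generator templates one entry per subsystem):

gen_nvmf_target_json() {
    # Emit the rendered config the trace prints via printf '%s\n'.
    cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
EOF
}

# <(...) materializes as /dev/fd/63, matching the traced command line.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json) -q 64 -o 65536 -w verify -t 10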
00:07:07.888 [2024-11-20 16:49:59.946841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1766367 ] 00:07:07.888 [2024-11-20 16:50:00.043428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.200 [2024-11-20 16:50:00.098403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.200 Running I/O for 10 seconds... 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:08.774 16:50:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.774 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.774 [2024-11-20 16:50:00.867273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867542] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.774 [2024-11-20 16:50:00.867648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the 
state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set 00:07:08.775 [2024-11-20 16:50:00.867859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set
00:07:08.775 [2024-11-20 16:50:00.867867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042150 is same with the state(6) to be set
00:07:08.775 [2024-11-20 16:50:00.868351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:08.775 [2024-11-20 16:50:00.868419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same READ / ABORTED - SQ DELETION (00/08) notice pair repeats for cid:1 through cid:63, lba 98432 through 106368, len:128, timestamps 16:50:00.868444 through 16:50:00.869592]
00:07:08.777 [2024-11-20 16:50:00.869601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d86f30 is same with the state(6) to be set
00:07:08.777 [2024-11-20 16:50:00.869730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:07:08.777 [2024-11-20 16:50:00.869743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:08.777 [2024-11-20 16:50:00.869753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:07:08.777 [2024-11-20 16:50:00.869762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:08.777 [2024-11-20 16:50:00.869771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:07:08.777 [2024-11-20 16:50:00.869778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:08.777 [2024-11-20 16:50:00.869787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:07:08.777 [2024-11-20 16:50:00.869795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:08.777 [2024-11-20 16:50:00.869802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6e000 is same with the state(6) to be set
00:07:08.777 [2024-11-20 16:50:00.871032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:08.777 task offset: 98304 on job bdev=Nvme0n1 fails
00:07:08.777
00:07:08.777 Latency(us)
00:07:08.777 [2024-11-20T15:50:00.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:08.777 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:08.777 Job: Nvme0n1 ended in about 0.56 seconds with error
00:07:08.777 Verification LBA range: start 0x0 length 0x400
00:07:08.777 Nvme0n1 : 0.56 1382.31 86.39 115.19 0.00 41678.75 5870.93 36263.25
00:07:08.777 [2024-11-20T15:50:00.953Z] ===================================================================================================================
00:07:08.777 [2024-11-20T15:50:00.953Z] Total : 1382.31 86.39 115.19 0.00 41678.75 5870.93 36263.25
00:07:08.777 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:08.777 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:08.777 [2024-11-20 16:50:00.873257] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:08.777 [2024-11-20 16:50:00.873298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6e000 (9): Bad file descriptor
00:07:08.777 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:08.777 [2024-11-20 16:50:00.878836] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
[2024-11-20 16:50:00.878932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
[2024-11-20 16:50:00.878961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 16:50:00.878979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
[2024-11-20 16:50:00.878988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
[2024-11-20 16:50:00.878997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-11-20 16:50:00.879004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b6e000
[2024-11-20 16:50:00.879034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6e000 (9): Bad file descriptor
[2024-11-20 16:50:00.879049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-20 16:50:00.879058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-20 16:50:00.879071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-20 16:50:00.879083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:07:08.777 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:08.777 16:50:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:07:09.717 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1766367
00:07:09.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1766367) - No such process
00:07:09.717 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:07:09.717 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:07:09.978 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:07:09.978 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:07:09.978 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:07:09.978 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:07:09.978 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:07:09.978 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:07:09.978 {
00:07:09.978 "params": {
00:07:09.978 "name": "Nvme$subsystem",
00:07:09.978 "trtype": "$TEST_TRANSPORT",
00:07:09.978 "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:09.978 "adrfam": "ipv4",
00:07:09.978 "trsvcid": "$NVMF_PORT",
00:07:09.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:09.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:09.978 "hdgst": ${hdgst:-false},
00:07:09.978 "ddgst": ${ddgst:-false}
00:07:09.978 },
00:07:09.978 "method": "bdev_nvme_attach_controller"
00:07:09.978 }
00:07:09.978 EOF
00:07:09.978 )")
00:07:09.978 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:07:09.978 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:07:09.978 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:07:09.978 16:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:07:09.978 "params": {
00:07:09.978 "name": "Nvme0",
00:07:09.978 "trtype": "tcp",
00:07:09.978 "traddr": "10.0.0.2",
00:07:09.978 "adrfam": "ipv4",
00:07:09.978 "trsvcid": "4420",
00:07:09.978 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:09.978 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:07:09.978 "hdgst": false,
00:07:09.978 "ddgst": false
00:07:09.978 },
00:07:09.978 "method": "bdev_nvme_attach_controller"
00:07:09.978 }'
00:07:09.978 [2024-11-20 16:50:01.944133] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization...
00:07:09.978 [2024-11-20 16:50:01.944195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1766820 ]
00:07:09.978 [2024-11-20 16:50:02.033590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:09.978 [2024-11-20 16:50:02.068957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:10.239 Running I/O for 1 seconds...
00:07:11.622 1750.00 IOPS, 109.38 MiB/s
00:07:11.622
00:07:11.622 Latency(us)
00:07:11.622 [2024-11-20T15:50:03.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:11.622 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:11.622 Verification LBA range: start 0x0 length 0x400
00:07:11.622 Nvme0n1 : 1.03 1800.76 112.55 0.00 0.00 34846.73 2007.04 32768.00
00:07:11.622 [2024-11-20T15:50:03.798Z] ===================================================================================================================
00:07:11.622 [2024-11-20T15:50:03.798Z] Total : 1800.76 112.55 0.00 0.00 34846.73 2007.04 32768.00
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:11.622 rmmod nvme_tcp
00:07:11.622 rmmod nvme_fabrics
00:07:11.622 rmmod nvme_keyring
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1766279 ']'
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1766279
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1766279 ']'
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1766279
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1766279
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1766279'
00:07:11.622 killing process with pid 1766279
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1766279
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1766279
00:07:11.622 [2024-11-20 16:50:03.760871] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:11.622 16:50:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:14.167 16:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:14.167 16:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:07:14.167
00:07:14.167 real 0m14.826s
00:07:14.167 user 0m23.756s
00:07:14.167 sys 0m6.859s
00:07:14.167 16:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:14.167 16:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:14.167 ************************************
00:07:14.167 END TEST nvmf_host_management
00:07:14.167 ************************************
00:07:14.167 16:50:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:07:14.167 16:50:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:14.167 16:50:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:14.167 16:50:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:14.167 ************************************
00:07:14.167 START TEST nvmf_lvol
00:07:14.167 ************************************
00:07:14.167 16:50:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:07:14.167 * Looking for test storage...
00:07:14.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:14.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:14.167 --rc genhtml_branch_coverage=1
00:07:14.167 --rc genhtml_function_coverage=1
00:07:14.167 --rc genhtml_legend=1
00:07:14.167 --rc geninfo_all_blocks=1
00:07:14.167 --rc geninfo_unexecuted_blocks=1
00:07:14.167
00:07:14.167 '
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:14.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:14.167 --rc genhtml_branch_coverage=1
00:07:14.167 --rc genhtml_function_coverage=1
00:07:14.167 --rc genhtml_legend=1
00:07:14.167 --rc geninfo_all_blocks=1
00:07:14.167 --rc geninfo_unexecuted_blocks=1
00:07:14.167
00:07:14.167 '
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:07:14.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:14.167 --rc genhtml_branch_coverage=1
00:07:14.167 --rc genhtml_function_coverage=1
00:07:14.167 --rc genhtml_legend=1
00:07:14.167 --rc geninfo_all_blocks=1
00:07:14.167 --rc geninfo_unexecuted_blocks=1
00:07:14.167
00:07:14.167 '
00:07:14.167 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:07:14.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:14.167 --rc genhtml_branch_coverage=1
00:07:14.167 --rc genhtml_function_coverage=1
00:07:14.167 --rc genhtml_legend=1
00:07:14.167 --rc geninfo_all_blocks=1
00:07:14.167 --rc geninfo_unexecuted_blocks=1
00:07:14.167
00:07:14.167 '
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable
00:07:14.168 16:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=()
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=()
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=()
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=()
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=()
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=()
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=()
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:07:22.311 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:07:22.311 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:07:22.311 Found net devices under 0000:4b:00.0: cvl_0_0
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:07:22.311 Found net devices under 0000:4b:00.1: cvl_0_1
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:22.311 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:22.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:22.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms
00:07:22.312
00:07:22.312 --- 10.0.0.2 ping statistics ---
00:07:22.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:22.312 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms
00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:22.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:22.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:07:22.312 00:07:22.312 --- 10.0.0.1 ping statistics --- 00:07:22.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.312 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1771411 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1771411 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1771411 ']' 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.312 16:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:22.312 [2024-11-20 16:50:13.695726] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
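What the trace above builds, in short: the harness takes the two e810 ports discovered earlier (cvl_0_0 under 0000:4b:00.0, cvl_0_1 under 0000:4b:00.1), moves the target port into a fresh network namespace at 10.0.0.2/24, leaves the initiator port in the default namespace at 10.0.0.1/24, opens TCP/4420 with an iptables rule tagged SPDK_NVMF (so cleanup can later strip exactly that rule), and verifies reachability with one ping in each direction. A minimal standalone sketch of the same setup, assuming root and these interface names; every command below appears in the trace:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean addresses
  ip netns add cvl_0_0_ns_spdk                           # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator stays in default ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator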
00:07:22.312 [2024-11-20 16:50:13.695790] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.312 [2024-11-20 16:50:13.798522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.312 [2024-11-20 16:50:13.851885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.312 [2024-11-20 16:50:13.851937] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.312 [2024-11-20 16:50:13.851952] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.312 [2024-11-20 16:50:13.851960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:22.312 [2024-11-20 16:50:13.851966] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:22.312 [2024-11-20 16:50:13.853942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.312 [2024-11-20 16:50:13.854103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.312 [2024-11-20 16:50:13.854104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.573 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.573 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:22.573 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:22.573 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:22.573 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:22.573 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.573 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:22.574 [2024-11-20 16:50:14.740004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.834 16:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:22.834 16:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:22.834 16:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:23.094 16:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:23.094 16:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:23.355 16:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:23.617 16:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=027e3d82-c7c1-4cac-b79c-a15687de5a10 00:07:23.617 16:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 027e3d82-c7c1-4cac-b79c-a15687de5a10 lvol 20 00:07:23.878 16:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b50f3aab-6650-4623-b1b9-494569805b3b 00:07:23.878 16:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:23.878 16:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b50f3aab-6650-4623-b1b9-494569805b3b 00:07:24.143 16:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:24.405 [2024-11-20 16:50:16.399288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:24.405 16:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:24.666 16:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1772117 00:07:24.666 16:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:24.666 16:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:25.606 16:50:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b50f3aab-6650-4623-b1b9-494569805b3b MY_SNAPSHOT 00:07:25.867 16:50:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0afdcf86-6aff-4fcf-b1e8-0c5b7f7fddb6 00:07:25.867 16:50:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b50f3aab-6650-4623-b1b9-494569805b3b 30 00:07:26.127 16:50:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0afdcf86-6aff-4fcf-b1e8-0c5b7f7fddb6 MY_CLONE 00:07:26.127 16:50:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f44cfca3-c056-472f-b49a-f67f0251dcef 00:07:26.127 16:50:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f44cfca3-c056-472f-b49a-f67f0251dcef 00:07:26.700 16:50:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1772117 00:07:36.694 Initializing NVMe Controllers 00:07:36.694 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:36.694 Controller IO queue size 128, less than required. 00:07:36.694 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
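Everything the lvol test exercised above went through rpc.py against the namespaced target: two 64 MiB malloc bdevs striped into a raid0, a logical-volume store on the raid, a 20 MiB lvol exported as namespace 1 of nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, and then, while spdk_nvme_perf drives 4 KiB random writes at queue depth 128 for 10 s, a snapshot, a resize to 30 MiB, a clone of the snapshot, and an inflate of the clone. Condensed into one runnable sketch; RPC is a shortened path and $LVS/$LVOL/$SNAP/$CLONE stand for the UUIDs each call prints (the full paths and UUIDs are in the trace):

  RPC=/path/to/spdk/scripts/rpc.py                     # shortened; see trace for full path
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512                       # -> Malloc0
  $RPC bdev_malloc_create 64 512                       # -> Malloc1
  $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  LVS=$($RPC bdev_lvol_create_lvstore raid0 lvs)       # prints the lvstore UUID
  LVOL=$($RPC bdev_lvol_create -u "$LVS" lvol 20)      # 20 MiB volume
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &    # load in the background
  sleep 1
  SNAP=$($RPC bdev_lvol_snapshot "$LVOL" MY_SNAPSHOT)  # snapshot under live I/O
  $RPC bdev_lvol_resize "$LVOL" 30                     # grow the live lvol to 30 MiB
  CLONE=$($RPC bdev_lvol_clone "$SNAP" MY_CLONE)
  $RPC bdev_lvol_inflate "$CLONE"                      # decouple clone from its snapshot
  wait                                                 # let perf finish its 10 s run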
00:07:36.694 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:36.694 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:36.694 Initialization complete. Launching workers. 00:07:36.695 ======================================================== 00:07:36.695 Latency(us) 00:07:36.695 Device Information : IOPS MiB/s Average min max 00:07:36.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17103.00 66.81 7486.51 725.00 41028.21 00:07:36.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16006.40 62.52 7997.99 4174.40 46727.75 00:07:36.695 ======================================================== 00:07:36.695 Total : 33109.40 129.33 7733.78 725.00 46727.75 00:07:36.695 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b50f3aab-6650-4623-b1b9-494569805b3b 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 027e3d82-c7c1-4cac-b79c-a15687de5a10 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:36.695 rmmod nvme_tcp 00:07:36.695 rmmod nvme_fabrics 00:07:36.695 rmmod nvme_keyring 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1771411 ']' 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1771411 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1771411 ']' 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1771411 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1771411 00:07:36.695 16:50:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1771411' 00:07:36.695 killing process with pid 1771411 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1771411 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1771411 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:36.695 16:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.079 16:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:38.079 00:07:38.079 real 0m24.014s 00:07:38.079 user 1m5.329s 00:07:38.079 sys 0m8.644s 00:07:38.079 16:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.079 16:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:38.079 ************************************ 00:07:38.079 END TEST nvmf_lvol 00:07:38.079 ************************************ 00:07:38.079 16:50:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:38.079 16:50:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:38.079 16:50:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.079 16:50:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:38.079 ************************************ 00:07:38.079 START TEST nvmf_lvs_grow 00:07:38.079 ************************************ 00:07:38.079 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:38.079 * Looking for test storage... 
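The teardown above mirrors the setup, and every nvmf_lvol run ends the same way: subsystem before lvol, lvol before its lvstore, then stop the target, unload the kernel initiator modules (the bare rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring are modprobe -r's own output), drop only the SPDK_NVMF-tagged firewall rule, and remove the namespace. Roughly, with $RPC, $LVOL, $LVS and $NVMFPID as placeholders for the rpc.py path and the values printed earlier:

  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $RPC bdev_lvol_delete "$LVOL"
  $RPC bdev_lvol_delete_lvstore -u "$LVS"
  kill "$NVMFPID" && wait "$NVMFPID"           # what killprocess amounts to here
  modprobe -v -r nvme-tcp                      # pulls nvme_fabrics/nvme_keyring with it
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the tagged rule
  ip netns delete cvl_0_0_ns_spdk              # the effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1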
00:07:38.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.079 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:38.079 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:38.079 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:38.079 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:38.079 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.079 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.079 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:38.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.080 --rc genhtml_branch_coverage=1 00:07:38.080 --rc genhtml_function_coverage=1 00:07:38.080 --rc genhtml_legend=1 00:07:38.080 --rc geninfo_all_blocks=1 00:07:38.080 --rc geninfo_unexecuted_blocks=1 00:07:38.080 00:07:38.080 ' 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:38.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.080 --rc genhtml_branch_coverage=1 00:07:38.080 --rc genhtml_function_coverage=1 00:07:38.080 --rc genhtml_legend=1 00:07:38.080 --rc geninfo_all_blocks=1 00:07:38.080 --rc geninfo_unexecuted_blocks=1 00:07:38.080 00:07:38.080 ' 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:38.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.080 --rc genhtml_branch_coverage=1 00:07:38.080 --rc genhtml_function_coverage=1 00:07:38.080 --rc genhtml_legend=1 00:07:38.080 --rc geninfo_all_blocks=1 00:07:38.080 --rc geninfo_unexecuted_blocks=1 00:07:38.080 00:07:38.080 ' 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:38.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.080 --rc genhtml_branch_coverage=1 00:07:38.080 --rc genhtml_function_coverage=1 00:07:38.080 --rc genhtml_legend=1 00:07:38.080 --rc geninfo_all_blocks=1 00:07:38.080 --rc geninfo_unexecuted_blocks=1 00:07:38.080 00:07:38.080 ' 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:38.080 16:50:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.080 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:38.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:38.341 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:38.342 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:38.342 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.342 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.342 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.342 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:38.342 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:38.342 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:38.342 16:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:46.473 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:46.474 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:46.474 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:46.474 16:50:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:46.474 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:46.474 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:46.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:46.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:07:46.474 00:07:46.474 --- 10.0.0.2 ping statistics --- 00:07:46.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.474 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:46.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:46.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:07:46.474 00:07:46.474 --- 10.0.0.1 ping statistics --- 00:07:46.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.474 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1778487 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1778487 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1778487 ']' 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.474 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.475 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.475 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.475 16:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.475 [2024-11-20 16:50:37.826353] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
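nvmfappstart here is the same dance as in the lvol test, with one difference worth noticing: this target runs with -m 0x1 (a single reactor on core 0) instead of the lvol test's -m 0x7 (reactors on cores 0 through 2). The harness launches nvmf_tgt inside the target namespace and its waitforlisten helper blocks until /var/tmp/spdk.sock answers, retrying up to 100 times. A rough approximation of that start-and-wait; the polling loop is an assumption, not the helper's exact code:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # poll the RPC socket until the app is up and answering
  until $RPC -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done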
00:07:46.475 [2024-11-20 16:50:37.826418] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.475 [2024-11-20 16:50:37.928263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.475 [2024-11-20 16:50:37.979211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.475 [2024-11-20 16:50:37.979265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.475 [2024-11-20 16:50:37.979274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.475 [2024-11-20 16:50:37.979281] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.475 [2024-11-20 16:50:37.979287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:46.475 [2024-11-20 16:50:37.980107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.475 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.475 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:46.475 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:46.475 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:46.475 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.735 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.735 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:46.735 [2024-11-20 16:50:38.845121] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.735 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:46.735 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.735 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.735 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.995 ************************************ 00:07:46.995 START TEST lvs_grow_clean 00:07:46.995 ************************************ 00:07:46.995 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:46.995 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:46.995 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:46.995 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:46.995 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:46.995 16:50:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:46.995 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:46.995 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:46.995 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:46.995 16:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:46.995 16:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:46.995 16:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:47.255 16:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6dac99b6-d180-4948-aad5-33174703ab98 00:07:47.255 16:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dac99b6-d180-4948-aad5-33174703ab98 00:07:47.255 16:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:47.516 16:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:47.516 16:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:47.516 16:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6dac99b6-d180-4948-aad5-33174703ab98 lvol 150 00:07:47.776 16:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=392c5453-0ab0-432f-8935-81f419e0c955 00:07:47.776 16:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:47.776 16:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:47.776 [2024-11-20 16:50:39.856540] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:47.776 [2024-11-20 16:50:39.856608] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:47.776 true 00:07:47.776 16:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
6dac99b6-d180-4948-aad5-33174703ab98 00:07:47.776 16:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:48.037 16:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:48.037 16:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:48.297 16:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 392c5453-0ab0-432f-8935-81f419e0c955 00:07:48.297 16:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:48.557 [2024-11-20 16:50:40.586872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.557 16:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.818 16:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1779203 00:07:48.818 16:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:48.818 16:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:48.818 16:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1779203 /var/tmp/bdevperf.sock 00:07:48.818 16:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1779203 ']' 00:07:48.818 16:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:48.818 16:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.818 16:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:48.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:48.818 16:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.818 16:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:48.818 [2024-11-20 16:50:40.843226] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
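This is the heart of lvs_grow_clean. A 200 MiB file backs an AIO bdev carved into 4 MiB clusters, so the fresh lvstore reports 49 data clusters (50 total, with the remainder taken by metadata at the --md-pages-per-cluster-ratio 300 shown above), and a 150 MiB lvol is created inside it. Truncating the file to 400 MiB and rescanning grows the bdev itself (51200 -> 102400 blocks in the rescan notice) but not the lvstore: the (( data_clusters == 49 )) check still passes, because the store only grows once bdev_lvol_grow_lvstore is called, which the trace below does before re-checking for 99 clusters. Condensed, with $RPC as before and $LVS standing for the lvstore UUID 6dac99b6-d180-4948-aad5-33174703ab98:

  truncate -s 200M aio_file && $RPC bdev_aio_create aio_file aio_bdev 4096
  LVS=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # 49
  $RPC bdev_lvol_create -u "$LVS" lvol 150     # 150 MiB lvol in the 196 MiB store
  truncate -s 400M aio_file                    # grow the backing file...
  $RPC bdev_aio_rescan aio_bdev                # ...so the bdev sees 102400 blocks
  $RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # still 49
  $RPC bdev_lvol_grow_lvstore -u "$LVS"        # the explicit grow is what flips it
  $RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # now 99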
00:07:48.818 [2024-11-20 16:50:40.843296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1779203 ] 00:07:48.818 [2024-11-20 16:50:40.935781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.818 [2024-11-20 16:50:40.989482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.758 16:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.758 16:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:49.758 16:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:50.018 Nvme0n1 00:07:50.019 16:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:50.278 [ 00:07:50.278 { 00:07:50.278 "name": "Nvme0n1", 00:07:50.278 "aliases": [ 00:07:50.278 "392c5453-0ab0-432f-8935-81f419e0c955" 00:07:50.278 ], 00:07:50.278 "product_name": "NVMe disk", 00:07:50.278 "block_size": 4096, 00:07:50.278 "num_blocks": 38912, 00:07:50.278 "uuid": "392c5453-0ab0-432f-8935-81f419e0c955", 00:07:50.278 "numa_id": 0, 00:07:50.278 "assigned_rate_limits": { 00:07:50.278 "rw_ios_per_sec": 0, 00:07:50.278 "rw_mbytes_per_sec": 0, 00:07:50.278 "r_mbytes_per_sec": 0, 00:07:50.278 "w_mbytes_per_sec": 0 00:07:50.279 }, 00:07:50.279 "claimed": false, 00:07:50.279 "zoned": false, 00:07:50.279 "supported_io_types": { 00:07:50.279 "read": true, 00:07:50.279 "write": true, 00:07:50.279 "unmap": true, 00:07:50.279 "flush": true, 00:07:50.279 "reset": true, 00:07:50.279 "nvme_admin": true, 00:07:50.279 "nvme_io": true, 00:07:50.279 "nvme_io_md": false, 00:07:50.279 "write_zeroes": true, 00:07:50.279 "zcopy": false, 00:07:50.279 "get_zone_info": false, 00:07:50.279 "zone_management": false, 00:07:50.279 "zone_append": false, 00:07:50.279 "compare": true, 00:07:50.279 "compare_and_write": true, 00:07:50.279 "abort": true, 00:07:50.279 "seek_hole": false, 00:07:50.279 "seek_data": false, 00:07:50.279 "copy": true, 00:07:50.279 "nvme_iov_md": false 00:07:50.279 }, 00:07:50.279 "memory_domains": [ 00:07:50.279 { 00:07:50.279 "dma_device_id": "system", 00:07:50.279 "dma_device_type": 1 00:07:50.279 } 00:07:50.279 ], 00:07:50.279 "driver_specific": { 00:07:50.279 "nvme": [ 00:07:50.279 { 00:07:50.279 "trid": { 00:07:50.279 "trtype": "TCP", 00:07:50.279 "adrfam": "IPv4", 00:07:50.279 "traddr": "10.0.0.2", 00:07:50.279 "trsvcid": "4420", 00:07:50.279 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:50.279 }, 00:07:50.279 "ctrlr_data": { 00:07:50.279 "cntlid": 1, 00:07:50.279 "vendor_id": "0x8086", 00:07:50.279 "model_number": "SPDK bdev Controller", 00:07:50.279 "serial_number": "SPDK0", 00:07:50.279 "firmware_revision": "25.01", 00:07:50.279 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:50.279 "oacs": { 00:07:50.279 "security": 0, 00:07:50.279 "format": 0, 00:07:50.279 "firmware": 0, 00:07:50.279 "ns_manage": 0 00:07:50.279 }, 00:07:50.279 "multi_ctrlr": true, 00:07:50.279 
"ana_reporting": false 00:07:50.279 }, 00:07:50.279 "vs": { 00:07:50.279 "nvme_version": "1.3" 00:07:50.279 }, 00:07:50.279 "ns_data": { 00:07:50.279 "id": 1, 00:07:50.279 "can_share": true 00:07:50.279 } 00:07:50.279 } 00:07:50.279 ], 00:07:50.279 "mp_policy": "active_passive" 00:07:50.279 } 00:07:50.279 } 00:07:50.279 ] 00:07:50.279 16:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1779537 00:07:50.279 16:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:50.279 16:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:50.279 Running I/O for 10 seconds... 00:07:51.227 Latency(us) 00:07:51.227 [2024-11-20T15:50:43.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.227 Nvme0n1 : 1.00 24944.00 97.44 0.00 0.00 0.00 0.00 0.00 00:07:51.227 [2024-11-20T15:50:43.403Z] =================================================================================================================== 00:07:51.227 [2024-11-20T15:50:43.403Z] Total : 24944.00 97.44 0.00 0.00 0.00 0.00 0.00 00:07:51.227 00:07:52.212 16:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6dac99b6-d180-4948-aad5-33174703ab98 00:07:52.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.212 Nvme0n1 : 2.00 25112.50 98.10 0.00 0.00 0.00 0.00 0.00 00:07:52.212 [2024-11-20T15:50:44.388Z] =================================================================================================================== 00:07:52.212 [2024-11-20T15:50:44.388Z] Total : 25112.50 98.10 0.00 0.00 0.00 0.00 0.00 00:07:52.212 00:07:52.472 true 00:07:52.472 16:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:52.472 16:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dac99b6-d180-4948-aad5-33174703ab98 00:07:52.732 16:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:52.732 16:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:52.732 16:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1779537 00:07:53.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.302 Nvme0n1 : 3.00 25204.67 98.46 0.00 0.00 0.00 0.00 0.00 00:07:53.302 [2024-11-20T15:50:45.478Z] =================================================================================================================== 00:07:53.302 [2024-11-20T15:50:45.478Z] Total : 25204.67 98.46 0.00 0.00 0.00 0.00 0.00 00:07:53.302 00:07:54.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.240 Nvme0n1 : 4.00 25259.25 98.67 0.00 0.00 0.00 0.00 0.00 00:07:54.240 [2024-11-20T15:50:46.416Z] 
=================================================================================================================== 00:07:54.240 [2024-11-20T15:50:46.416Z] Total : 25259.25 98.67 0.00 0.00 0.00 0.00 0.00 00:07:54.240 00:07:55.623 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.623 Nvme0n1 : 5.00 25301.20 98.83 0.00 0.00 0.00 0.00 0.00 00:07:55.623 [2024-11-20T15:50:47.799Z] =================================================================================================================== 00:07:55.623 [2024-11-20T15:50:47.799Z] Total : 25301.20 98.83 0.00 0.00 0.00 0.00 0.00 00:07:55.623 00:07:56.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.563 Nvme0n1 : 6.00 25329.33 98.94 0.00 0.00 0.00 0.00 0.00 00:07:56.563 [2024-11-20T15:50:48.739Z] =================================================================================================================== 00:07:56.563 [2024-11-20T15:50:48.739Z] Total : 25329.33 98.94 0.00 0.00 0.00 0.00 0.00 00:07:56.563 00:07:57.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.505 Nvme0n1 : 7.00 25349.57 99.02 0.00 0.00 0.00 0.00 0.00 00:07:57.505 [2024-11-20T15:50:49.681Z] =================================================================================================================== 00:07:57.505 [2024-11-20T15:50:49.681Z] Total : 25349.57 99.02 0.00 0.00 0.00 0.00 0.00 00:07:57.505 00:07:58.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.447 Nvme0n1 : 8.00 25364.62 99.08 0.00 0.00 0.00 0.00 0.00 00:07:58.447 [2024-11-20T15:50:50.623Z] =================================================================================================================== 00:07:58.447 [2024-11-20T15:50:50.623Z] Total : 25364.62 99.08 0.00 0.00 0.00 0.00 0.00 00:07:58.447 00:07:59.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.388 Nvme0n1 : 9.00 25383.67 99.15 0.00 0.00 0.00 0.00 0.00 00:07:59.388 [2024-11-20T15:50:51.564Z] =================================================================================================================== 00:07:59.388 [2024-11-20T15:50:51.565Z] Total : 25383.67 99.15 0.00 0.00 0.00 0.00 0.00 00:07:59.389 00:08:00.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.331 Nvme0n1 : 10.00 25392.50 99.19 0.00 0.00 0.00 0.00 0.00 00:08:00.331 [2024-11-20T15:50:52.507Z] =================================================================================================================== 00:08:00.331 [2024-11-20T15:50:52.507Z] Total : 25392.50 99.19 0.00 0.00 0.00 0.00 0.00 00:08:00.331 00:08:00.331 00:08:00.331 Latency(us) 00:08:00.331 [2024-11-20T15:50:52.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.331 Nvme0n1 : 10.00 25393.95 99.20 0.00 0.00 5037.13 2498.56 12451.84 00:08:00.331 [2024-11-20T15:50:52.507Z] =================================================================================================================== 00:08:00.331 [2024-11-20T15:50:52.507Z] Total : 25393.95 99.20 0.00 0.00 5037.13 2498.56 12451.84 00:08:00.331 { 00:08:00.331 "results": [ 00:08:00.331 { 00:08:00.331 "job": "Nvme0n1", 00:08:00.331 "core_mask": "0x2", 00:08:00.331 "workload": "randwrite", 00:08:00.331 "status": "finished", 00:08:00.331 "queue_depth": 128, 00:08:00.331 "io_size": 4096, 00:08:00.331 
"runtime": 10.004469, 00:08:00.331 "iops": 25393.95144310008, 00:08:00.332 "mibps": 99.19512282460968, 00:08:00.332 "io_failed": 0, 00:08:00.332 "io_timeout": 0, 00:08:00.332 "avg_latency_us": 5037.132957296312, 00:08:00.332 "min_latency_us": 2498.56, 00:08:00.332 "max_latency_us": 12451.84 00:08:00.332 } 00:08:00.332 ], 00:08:00.332 "core_count": 1 00:08:00.332 } 00:08:00.332 16:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1779203 00:08:00.332 16:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1779203 ']' 00:08:00.332 16:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1779203 00:08:00.332 16:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:00.332 16:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.332 16:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1779203 00:08:00.332 16:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:00.332 16:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:00.332 16:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1779203' 00:08:00.332 killing process with pid 1779203 00:08:00.332 16:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1779203 00:08:00.332 Received shutdown signal, test time was about 10.000000 seconds 00:08:00.332 00:08:00.332 Latency(us) 00:08:00.332 [2024-11-20T15:50:52.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.332 [2024-11-20T15:50:52.508Z] =================================================================================================================== 00:08:00.332 [2024-11-20T15:50:52.508Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:00.332 16:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1779203 00:08:00.592 16:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.853 16:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:00.853 16:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dac99b6-d180-4948-aad5-33174703ab98 00:08:00.853 16:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:01.113 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:01.113 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:01.113 16:50:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:01.375 [2024-11-20 16:50:53.290744] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:01.375 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dac99b6-d180-4948-aad5-33174703ab98 00:08:01.375 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:01.375 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dac99b6-d180-4948-aad5-33174703ab98 00:08:01.375 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.375 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.375 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.375 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.375 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.375 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.375 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.375 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:01.376 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dac99b6-d180-4948-aad5-33174703ab98 00:08:01.376 request: 00:08:01.376 { 00:08:01.376 "uuid": "6dac99b6-d180-4948-aad5-33174703ab98", 00:08:01.376 "method": "bdev_lvol_get_lvstores", 00:08:01.376 "req_id": 1 00:08:01.376 } 00:08:01.376 Got JSON-RPC error response 00:08:01.376 response: 00:08:01.376 { 00:08:01.376 "code": -19, 00:08:01.376 "message": "No such device" 00:08:01.376 } 00:08:01.376 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:01.376 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:01.376 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:01.376 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:01.376 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:01.637 aio_bdev 00:08:01.637 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 392c5453-0ab0-432f-8935-81f419e0c955 00:08:01.637 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=392c5453-0ab0-432f-8935-81f419e0c955 00:08:01.637 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.637 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:01.637 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.637 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.637 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:01.898 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 392c5453-0ab0-432f-8935-81f419e0c955 -t 2000 00:08:01.898 [ 00:08:01.898 { 00:08:01.898 "name": "392c5453-0ab0-432f-8935-81f419e0c955", 00:08:01.898 "aliases": [ 00:08:01.898 "lvs/lvol" 00:08:01.898 ], 00:08:01.898 "product_name": "Logical Volume", 00:08:01.898 "block_size": 4096, 00:08:01.898 "num_blocks": 38912, 00:08:01.898 "uuid": "392c5453-0ab0-432f-8935-81f419e0c955", 00:08:01.898 "assigned_rate_limits": { 00:08:01.898 "rw_ios_per_sec": 0, 00:08:01.898 "rw_mbytes_per_sec": 0, 00:08:01.898 "r_mbytes_per_sec": 0, 00:08:01.898 "w_mbytes_per_sec": 0 00:08:01.898 }, 00:08:01.898 "claimed": false, 00:08:01.898 "zoned": false, 00:08:01.898 "supported_io_types": { 00:08:01.898 "read": true, 00:08:01.898 "write": true, 00:08:01.898 "unmap": true, 00:08:01.898 "flush": false, 00:08:01.898 "reset": true, 00:08:01.898 "nvme_admin": false, 00:08:01.898 "nvme_io": false, 00:08:01.898 "nvme_io_md": false, 00:08:01.898 "write_zeroes": true, 00:08:01.898 "zcopy": false, 00:08:01.898 "get_zone_info": false, 00:08:01.898 "zone_management": false, 00:08:01.898 "zone_append": false, 00:08:01.898 "compare": false, 00:08:01.898 "compare_and_write": false, 00:08:01.898 "abort": false, 00:08:01.898 "seek_hole": true, 00:08:01.898 "seek_data": true, 00:08:01.898 "copy": false, 00:08:01.898 "nvme_iov_md": false 00:08:01.898 }, 00:08:01.898 "driver_specific": { 00:08:01.898 "lvol": { 00:08:01.898 "lvol_store_uuid": "6dac99b6-d180-4948-aad5-33174703ab98", 00:08:01.898 "base_bdev": "aio_bdev", 00:08:01.898 "thin_provision": false, 00:08:01.898 "num_allocated_clusters": 38, 00:08:01.898 "snapshot": false, 00:08:01.898 "clone": false, 00:08:01.898 "esnap_clone": false 00:08:01.898 } 00:08:01.898 } 00:08:01.898 } 00:08:01.898 ] 00:08:01.898 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:01.898 16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dac99b6-d180-4948-aad5-33174703ab98 00:08:01.898 
16:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:02.158 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:02.158 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dac99b6-d180-4948-aad5-33174703ab98 00:08:02.158 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:02.418 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:02.418 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 392c5453-0ab0-432f-8935-81f419e0c955 00:08:02.418 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6dac99b6-d180-4948-aad5-33174703ab98 00:08:02.678 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:02.678 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:02.939 00:08:02.939 real 0m15.951s 00:08:02.939 user 0m15.719s 00:08:02.939 sys 0m1.402s 00:08:02.939 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.939 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:02.939 ************************************ 00:08:02.939 END TEST lvs_grow_clean 00:08:02.939 ************************************ 00:08:02.939 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:02.939 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:02.939 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.939 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:02.939 ************************************ 00:08:02.939 START TEST lvs_grow_dirty 00:08:02.939 ************************************ 00:08:02.939 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:02.939 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:02.939 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:02.939 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:02.939 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:02.939 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:02.939 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:02.939 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:02.939 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:02.939 16:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:03.201 16:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:03.201 16:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:03.201 16:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d6fc7154-00d2-4fdb-9eba-f27670b828c3 00:08:03.201 16:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6fc7154-00d2-4fdb-9eba-f27670b828c3 00:08:03.201 16:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:03.461 16:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:03.462 16:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:03.462 16:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d6fc7154-00d2-4fdb-9eba-f27670b828c3 lvol 150 00:08:03.723 16:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e09110d3-d74c-4a5b-99d1-fdbf0a90ac7d 00:08:03.723 16:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.723 16:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:03.723 [2024-11-20 16:50:55.867427] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:03.723 [2024-11-20 16:50:55.867468] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:03.723 true 00:08:03.723 16:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6fc7154-00d2-4fdb-9eba-f27670b828c3 00:08:03.723 16:50:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:03.984 16:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:03.984 16:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:04.243 16:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e09110d3-d74c-4a5b-99d1-fdbf0a90ac7d 00:08:04.243 16:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:04.503 [2024-11-20 16:50:56.541381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.503 16:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:04.763 16:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1782306 00:08:04.763 16:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:04.763 16:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:04.763 16:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1782306 /var/tmp/bdevperf.sock 00:08:04.763 16:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1782306 ']' 00:08:04.763 16:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:04.763 16:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.763 16:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:04.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:04.763 16:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.763 16:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:04.763 [2024-11-20 16:50:56.756886] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
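(For reference: both the clean and dirty runs drive I/O through a second SPDK process while the grow happens underneath. The lvol is exported over NVMe/TCP, bdevperf attaches to it as an initiator, and perform_tests produces the per-second latency table that follows. A sketch of that wiring, assuming the test's 10.0.0.2 listener address and the $lvol placeholder from the sketch above:)

  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # 10 s randwrite job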
00:08:04.763 [2024-11-20 16:50:56.756940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1782306 ] 00:08:04.763 [2024-11-20 16:50:56.840868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.763 [2024-11-20 16:50:56.870756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.703 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.703 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:05.703 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:05.703 Nvme0n1 00:08:05.703 16:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:05.964 [ 00:08:05.964 { 00:08:05.964 "name": "Nvme0n1", 00:08:05.964 "aliases": [ 00:08:05.964 "e09110d3-d74c-4a5b-99d1-fdbf0a90ac7d" 00:08:05.964 ], 00:08:05.964 "product_name": "NVMe disk", 00:08:05.964 "block_size": 4096, 00:08:05.964 "num_blocks": 38912, 00:08:05.964 "uuid": "e09110d3-d74c-4a5b-99d1-fdbf0a90ac7d", 00:08:05.964 "numa_id": 0, 00:08:05.964 "assigned_rate_limits": { 00:08:05.964 "rw_ios_per_sec": 0, 00:08:05.964 "rw_mbytes_per_sec": 0, 00:08:05.964 "r_mbytes_per_sec": 0, 00:08:05.964 "w_mbytes_per_sec": 0 00:08:05.964 }, 00:08:05.964 "claimed": false, 00:08:05.964 "zoned": false, 00:08:05.964 "supported_io_types": { 00:08:05.964 "read": true, 00:08:05.964 "write": true, 00:08:05.964 "unmap": true, 00:08:05.964 "flush": true, 00:08:05.964 "reset": true, 00:08:05.964 "nvme_admin": true, 00:08:05.964 "nvme_io": true, 00:08:05.964 "nvme_io_md": false, 00:08:05.964 "write_zeroes": true, 00:08:05.964 "zcopy": false, 00:08:05.964 "get_zone_info": false, 00:08:05.964 "zone_management": false, 00:08:05.964 "zone_append": false, 00:08:05.964 "compare": true, 00:08:05.964 "compare_and_write": true, 00:08:05.964 "abort": true, 00:08:05.964 "seek_hole": false, 00:08:05.964 "seek_data": false, 00:08:05.964 "copy": true, 00:08:05.964 "nvme_iov_md": false 00:08:05.964 }, 00:08:05.964 "memory_domains": [ 00:08:05.964 { 00:08:05.964 "dma_device_id": "system", 00:08:05.964 "dma_device_type": 1 00:08:05.964 } 00:08:05.964 ], 00:08:05.964 "driver_specific": { 00:08:05.964 "nvme": [ 00:08:05.964 { 00:08:05.964 "trid": { 00:08:05.964 "trtype": "TCP", 00:08:05.964 "adrfam": "IPv4", 00:08:05.964 "traddr": "10.0.0.2", 00:08:05.964 "trsvcid": "4420", 00:08:05.964 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:05.964 }, 00:08:05.964 "ctrlr_data": { 00:08:05.964 "cntlid": 1, 00:08:05.964 "vendor_id": "0x8086", 00:08:05.964 "model_number": "SPDK bdev Controller", 00:08:05.964 "serial_number": "SPDK0", 00:08:05.964 "firmware_revision": "25.01", 00:08:05.964 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:05.964 "oacs": { 00:08:05.964 "security": 0, 00:08:05.964 "format": 0, 00:08:05.964 "firmware": 0, 00:08:05.964 "ns_manage": 0 00:08:05.964 }, 00:08:05.964 "multi_ctrlr": true, 00:08:05.964 
"ana_reporting": false 00:08:05.964 }, 00:08:05.964 "vs": { 00:08:05.964 "nvme_version": "1.3" 00:08:05.964 }, 00:08:05.964 "ns_data": { 00:08:05.964 "id": 1, 00:08:05.964 "can_share": true 00:08:05.964 } 00:08:05.964 } 00:08:05.964 ], 00:08:05.964 "mp_policy": "active_passive" 00:08:05.964 } 00:08:05.964 } 00:08:05.964 ] 00:08:05.964 16:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1782644 00:08:05.964 16:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:05.964 16:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:05.964 Running I/O for 10 seconds... 00:08:07.348 Latency(us) 00:08:07.348 [2024-11-20T15:50:59.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.348 Nvme0n1 : 1.00 24728.00 96.59 0.00 0.00 0.00 0.00 0.00 00:08:07.348 [2024-11-20T15:50:59.524Z] =================================================================================================================== 00:08:07.348 [2024-11-20T15:50:59.524Z] Total : 24728.00 96.59 0.00 0.00 0.00 0.00 0.00 00:08:07.348 00:08:07.920 16:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d6fc7154-00d2-4fdb-9eba-f27670b828c3 00:08:08.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.181 Nvme0n1 : 2.00 24923.50 97.36 0.00 0.00 0.00 0.00 0.00 00:08:08.181 [2024-11-20T15:51:00.357Z] =================================================================================================================== 00:08:08.181 [2024-11-20T15:51:00.357Z] Total : 24923.50 97.36 0.00 0.00 0.00 0.00 0.00 00:08:08.181 00:08:08.181 true 00:08:08.181 16:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6fc7154-00d2-4fdb-9eba-f27670b828c3 00:08:08.181 16:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:08.181 16:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:08.181 16:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:08.181 16:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1782644 00:08:09.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.123 Nvme0n1 : 3.00 24996.33 97.64 0.00 0.00 0.00 0.00 0.00 00:08:09.123 [2024-11-20T15:51:01.299Z] =================================================================================================================== 00:08:09.123 [2024-11-20T15:51:01.299Z] Total : 24996.33 97.64 0.00 0.00 0.00 0.00 0.00 00:08:09.123 00:08:10.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.066 Nvme0n1 : 4.00 25062.50 97.90 0.00 0.00 0.00 0.00 0.00 00:08:10.066 [2024-11-20T15:51:02.242Z] 
=================================================================================================================== 00:08:10.066 [2024-11-20T15:51:02.242Z] Total : 25062.50 97.90 0.00 0.00 0.00 0.00 0.00 00:08:10.066 00:08:11.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.006 Nvme0n1 : 5.00 25120.80 98.13 0.00 0.00 0.00 0.00 0.00 00:08:11.006 [2024-11-20T15:51:03.182Z] =================================================================================================================== 00:08:11.006 [2024-11-20T15:51:03.182Z] Total : 25120.80 98.13 0.00 0.00 0.00 0.00 0.00 00:08:11.006 00:08:11.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.949 Nvme0n1 : 6.00 25155.17 98.26 0.00 0.00 0.00 0.00 0.00 00:08:11.949 [2024-11-20T15:51:04.125Z] =================================================================================================================== 00:08:11.949 [2024-11-20T15:51:04.125Z] Total : 25155.17 98.26 0.00 0.00 0.00 0.00 0.00 00:08:11.949 00:08:13.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.347 Nvme0n1 : 7.00 25187.00 98.39 0.00 0.00 0.00 0.00 0.00 00:08:13.347 [2024-11-20T15:51:05.523Z] =================================================================================================================== 00:08:13.347 [2024-11-20T15:51:05.523Z] Total : 25187.00 98.39 0.00 0.00 0.00 0.00 0.00 00:08:13.347 00:08:14.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.285 Nvme0n1 : 8.00 25212.50 98.49 0.00 0.00 0.00 0.00 0.00 00:08:14.285 [2024-11-20T15:51:06.461Z] =================================================================================================================== 00:08:14.285 [2024-11-20T15:51:06.461Z] Total : 25212.50 98.49 0.00 0.00 0.00 0.00 0.00 00:08:14.285 00:08:15.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.225 Nvme0n1 : 9.00 25235.78 98.58 0.00 0.00 0.00 0.00 0.00 00:08:15.225 [2024-11-20T15:51:07.401Z] =================================================================================================================== 00:08:15.225 [2024-11-20T15:51:07.401Z] Total : 25235.78 98.58 0.00 0.00 0.00 0.00 0.00 00:08:15.225 00:08:16.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.164 Nvme0n1 : 10.00 25254.70 98.65 0.00 0.00 0.00 0.00 0.00 00:08:16.164 [2024-11-20T15:51:08.340Z] =================================================================================================================== 00:08:16.164 [2024-11-20T15:51:08.340Z] Total : 25254.70 98.65 0.00 0.00 0.00 0.00 0.00 00:08:16.164 00:08:16.164 00:08:16.164 Latency(us) 00:08:16.164 [2024-11-20T15:51:08.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.164 Nvme0n1 : 10.00 25249.89 98.63 0.00 0.00 5066.07 3112.96 16274.77 00:08:16.164 [2024-11-20T15:51:08.340Z] =================================================================================================================== 00:08:16.164 [2024-11-20T15:51:08.340Z] Total : 25249.89 98.63 0.00 0.00 5066.07 3112.96 16274.77 00:08:16.164 { 00:08:16.164 "results": [ 00:08:16.164 { 00:08:16.164 "job": "Nvme0n1", 00:08:16.164 "core_mask": "0x2", 00:08:16.164 "workload": "randwrite", 00:08:16.164 "status": "finished", 00:08:16.164 "queue_depth": 128, 00:08:16.164 "io_size": 4096, 00:08:16.164 
"runtime": 10.004399, 00:08:16.164 "iops": 25249.892572257464, 00:08:16.164 "mibps": 98.63239286038072, 00:08:16.164 "io_failed": 0, 00:08:16.164 "io_timeout": 0, 00:08:16.164 "avg_latency_us": 5066.068576646477, 00:08:16.164 "min_latency_us": 3112.96, 00:08:16.164 "max_latency_us": 16274.773333333333 00:08:16.164 } 00:08:16.164 ], 00:08:16.164 "core_count": 1 00:08:16.164 } 00:08:16.164 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1782306 00:08:16.164 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1782306 ']' 00:08:16.164 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1782306 00:08:16.164 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:16.165 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.165 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1782306 00:08:16.165 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:16.165 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:16.165 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1782306' 00:08:16.165 killing process with pid 1782306 00:08:16.165 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1782306 00:08:16.165 Received shutdown signal, test time was about 10.000000 seconds 00:08:16.165 00:08:16.165 Latency(us) 00:08:16.165 [2024-11-20T15:51:08.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.165 [2024-11-20T15:51:08.341Z] =================================================================================================================== 00:08:16.165 [2024-11-20T15:51:08.341Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:16.165 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1782306 00:08:16.165 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:16.425 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:16.685 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6fc7154-00d2-4fdb-9eba-f27670b828c3 00:08:16.685 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:16.685 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:16.685 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:16.685 16:51:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1778487 00:08:16.685 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1778487 00:08:16.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1778487 Killed "${NVMF_APP[@]}" "$@" 00:08:16.945 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:16.945 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:16.945 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:16.945 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:16.945 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:16.945 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1784861 00:08:16.945 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1784861 00:08:16.945 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:16.945 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1784861 ']' 00:08:16.945 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.945 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.945 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.945 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.945 16:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:16.945 [2024-11-20 16:51:08.954076] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:08:16.945 [2024-11-20 16:51:08.954136] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.945 [2024-11-20 16:51:09.048167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.945 [2024-11-20 16:51:09.085888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.945 [2024-11-20 16:51:09.085929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.945 [2024-11-20 16:51:09.085940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.945 [2024-11-20 16:51:09.085946] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:16.945 [2024-11-20 16:51:09.085951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.945 [2024-11-20 16:51:09.086508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.883 16:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.883 16:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:17.883 16:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:17.883 16:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:17.883 16:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:17.883 16:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.883 16:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:17.883 [2024-11-20 16:51:09.945364] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:17.883 [2024-11-20 16:51:09.945450] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:17.883 [2024-11-20 16:51:09.945473] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:17.883 16:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:17.883 16:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e09110d3-d74c-4a5b-99d1-fdbf0a90ac7d 00:08:17.883 16:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e09110d3-d74c-4a5b-99d1-fdbf0a90ac7d 00:08:17.884 16:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.884 16:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:17.884 16:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.884 16:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.884 16:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:18.143 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e09110d3-d74c-4a5b-99d1-fdbf0a90ac7d -t 2000 00:08:18.143 [ 00:08:18.144 { 00:08:18.144 "name": "e09110d3-d74c-4a5b-99d1-fdbf0a90ac7d", 00:08:18.144 "aliases": [ 00:08:18.144 "lvs/lvol" 00:08:18.144 ], 00:08:18.144 "product_name": "Logical Volume", 00:08:18.144 "block_size": 4096, 00:08:18.144 "num_blocks": 38912, 00:08:18.144 "uuid": "e09110d3-d74c-4a5b-99d1-fdbf0a90ac7d", 00:08:18.144 "assigned_rate_limits": { 00:08:18.144 "rw_ios_per_sec": 0, 00:08:18.144 "rw_mbytes_per_sec": 0, 
00:08:18.144 "r_mbytes_per_sec": 0, 00:08:18.144 "w_mbytes_per_sec": 0 00:08:18.144 }, 00:08:18.144 "claimed": false, 00:08:18.144 "zoned": false, 00:08:18.144 "supported_io_types": { 00:08:18.144 "read": true, 00:08:18.144 "write": true, 00:08:18.144 "unmap": true, 00:08:18.144 "flush": false, 00:08:18.144 "reset": true, 00:08:18.144 "nvme_admin": false, 00:08:18.144 "nvme_io": false, 00:08:18.144 "nvme_io_md": false, 00:08:18.144 "write_zeroes": true, 00:08:18.144 "zcopy": false, 00:08:18.144 "get_zone_info": false, 00:08:18.144 "zone_management": false, 00:08:18.144 "zone_append": false, 00:08:18.144 "compare": false, 00:08:18.144 "compare_and_write": false, 00:08:18.144 "abort": false, 00:08:18.144 "seek_hole": true, 00:08:18.144 "seek_data": true, 00:08:18.144 "copy": false, 00:08:18.144 "nvme_iov_md": false 00:08:18.144 }, 00:08:18.144 "driver_specific": { 00:08:18.144 "lvol": { 00:08:18.144 "lvol_store_uuid": "d6fc7154-00d2-4fdb-9eba-f27670b828c3", 00:08:18.144 "base_bdev": "aio_bdev", 00:08:18.144 "thin_provision": false, 00:08:18.144 "num_allocated_clusters": 38, 00:08:18.144 "snapshot": false, 00:08:18.144 "clone": false, 00:08:18.144 "esnap_clone": false 00:08:18.144 } 00:08:18.144 } 00:08:18.144 } 00:08:18.144 ] 00:08:18.144 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:18.144 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6fc7154-00d2-4fdb-9eba-f27670b828c3 00:08:18.144 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:18.404 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:18.404 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6fc7154-00d2-4fdb-9eba-f27670b828c3 00:08:18.404 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:18.664 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:18.664 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:18.664 [2024-11-20 16:51:10.810005] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:18.924 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6fc7154-00d2-4fdb-9eba-f27670b828c3 00:08:18.924 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:18.924 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6fc7154-00d2-4fdb-9eba-f27670b828c3 00:08:18.924 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.924 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.924 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.924 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.924 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.924 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.924 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.924 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:18.924 16:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6fc7154-00d2-4fdb-9eba-f27670b828c3 00:08:18.924 request: 00:08:18.924 { 00:08:18.924 "uuid": "d6fc7154-00d2-4fdb-9eba-f27670b828c3", 00:08:18.924 "method": "bdev_lvol_get_lvstores", 00:08:18.924 "req_id": 1 00:08:18.924 } 00:08:18.924 Got JSON-RPC error response 00:08:18.924 response: 00:08:18.924 { 00:08:18.924 "code": -19, 00:08:18.924 "message": "No such device" 00:08:18.924 } 00:08:18.924 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:18.924 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:18.924 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:18.924 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:18.924 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:19.183 aio_bdev 00:08:19.183 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e09110d3-d74c-4a5b-99d1-fdbf0a90ac7d 00:08:19.183 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e09110d3-d74c-4a5b-99d1-fdbf0a90ac7d 00:08:19.183 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:19.183 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:19.183 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:19.183 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:19.183 16:51:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:19.442 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e09110d3-d74c-4a5b-99d1-fdbf0a90ac7d -t 2000 00:08:19.442 [ 00:08:19.442 { 00:08:19.442 "name": "e09110d3-d74c-4a5b-99d1-fdbf0a90ac7d", 00:08:19.442 "aliases": [ 00:08:19.442 "lvs/lvol" 00:08:19.442 ], 00:08:19.442 "product_name": "Logical Volume", 00:08:19.442 "block_size": 4096, 00:08:19.442 "num_blocks": 38912, 00:08:19.442 "uuid": "e09110d3-d74c-4a5b-99d1-fdbf0a90ac7d", 00:08:19.442 "assigned_rate_limits": { 00:08:19.442 "rw_ios_per_sec": 0, 00:08:19.442 "rw_mbytes_per_sec": 0, 00:08:19.442 "r_mbytes_per_sec": 0, 00:08:19.442 "w_mbytes_per_sec": 0 00:08:19.442 }, 00:08:19.442 "claimed": false, 00:08:19.442 "zoned": false, 00:08:19.442 "supported_io_types": { 00:08:19.442 "read": true, 00:08:19.442 "write": true, 00:08:19.442 "unmap": true, 00:08:19.442 "flush": false, 00:08:19.442 "reset": true, 00:08:19.442 "nvme_admin": false, 00:08:19.442 "nvme_io": false, 00:08:19.442 "nvme_io_md": false, 00:08:19.443 "write_zeroes": true, 00:08:19.443 "zcopy": false, 00:08:19.443 "get_zone_info": false, 00:08:19.443 "zone_management": false, 00:08:19.443 "zone_append": false, 00:08:19.443 "compare": false, 00:08:19.443 "compare_and_write": false, 00:08:19.443 "abort": false, 00:08:19.443 "seek_hole": true, 00:08:19.443 "seek_data": true, 00:08:19.443 "copy": false, 00:08:19.443 "nvme_iov_md": false 00:08:19.443 }, 00:08:19.443 "driver_specific": { 00:08:19.443 "lvol": { 00:08:19.443 "lvol_store_uuid": "d6fc7154-00d2-4fdb-9eba-f27670b828c3", 00:08:19.443 "base_bdev": "aio_bdev", 00:08:19.443 "thin_provision": false, 00:08:19.443 "num_allocated_clusters": 38, 00:08:19.443 "snapshot": false, 00:08:19.443 "clone": false, 00:08:19.443 "esnap_clone": false 00:08:19.443 } 00:08:19.443 } 00:08:19.443 } 00:08:19.443 ] 00:08:19.443 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:19.443 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6fc7154-00d2-4fdb-9eba-f27670b828c3 00:08:19.443 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:19.702 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:19.702 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6fc7154-00d2-4fdb-9eba-f27670b828c3 00:08:19.702 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:19.961 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:19.961 16:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e09110d3-d74c-4a5b-99d1-fdbf0a90ac7d 00:08:19.961 16:51:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d6fc7154-00d2-4fdb-9eba-f27670b828c3 00:08:20.220 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:20.480 00:08:20.480 real 0m17.550s 00:08:20.480 user 0m45.835s 00:08:20.480 sys 0m3.057s 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:20.480 ************************************ 00:08:20.480 END TEST lvs_grow_dirty 00:08:20.480 ************************************ 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:20.480 nvmf_trace.0 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:20.480 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:20.480 rmmod nvme_tcp 00:08:20.480 rmmod nvme_fabrics 00:08:20.740 rmmod nvme_keyring 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:20.740 
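[Annotation] The trace above closes out lvs_grow_dirty: once aio_bdev is re-created over the same backing file and the lvol reappears, the lvstore is re-read to confirm the grown cluster counts (61 free / 99 total) survived the dirty shutdown, and then the lvol, lvstore, and AIO bdev are torn down. A condensed sketch of that verify-then-cleanup sequence, using only the RPCs visible in this run ($lvs_uuid and $lvol_uuid stand in for the d6fc7154-... and e09110d3-... UUIDs logged above):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  free=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
  total=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
  (( free == 61 && total == 99 ))               # same assertions as sh@88 / sh@89 above
  $rpc bdev_lvol_delete "$lvol_uuid"            # sh@92
  $rpc bdev_lvol_delete_lvstore -u "$lvs_uuid"  # sh@93
  $rpc bdev_aio_delete aio_bdev                 # sh@94
  rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev  # sh@95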
16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1784861 ']' 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1784861 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1784861 ']' 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1784861 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1784861 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1784861' 00:08:20.740 killing process with pid 1784861 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1784861 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1784861 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.740 16:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.279 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:23.279 00:08:23.279 real 0m44.920s 00:08:23.279 user 1m7.998s 00:08:23.279 sys 0m10.634s 00:08:23.279 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.279 16:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.279 ************************************ 00:08:23.279 END TEST nvmf_lvs_grow 00:08:23.279 ************************************ 00:08:23.279 16:51:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:23.279 16:51:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:23.279 16:51:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.279 16:51:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:23.279 ************************************ 00:08:23.279 START TEST nvmf_bdev_io_wait 00:08:23.279 ************************************ 00:08:23.279 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:23.279 * Looking for test storage... 00:08:23.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.279 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:23.279 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:23.279 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:23.279 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:23.279 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.279 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.279 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.279 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.279 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.279 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.279 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.279 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.279 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.279 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.279 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:23.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.280 --rc genhtml_branch_coverage=1 00:08:23.280 --rc genhtml_function_coverage=1 00:08:23.280 --rc genhtml_legend=1 00:08:23.280 --rc geninfo_all_blocks=1 00:08:23.280 --rc geninfo_unexecuted_blocks=1 00:08:23.280 00:08:23.280 ' 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:23.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.280 --rc genhtml_branch_coverage=1 00:08:23.280 --rc genhtml_function_coverage=1 00:08:23.280 --rc genhtml_legend=1 00:08:23.280 --rc geninfo_all_blocks=1 00:08:23.280 --rc geninfo_unexecuted_blocks=1 00:08:23.280 00:08:23.280 ' 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:23.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.280 --rc genhtml_branch_coverage=1 00:08:23.280 --rc genhtml_function_coverage=1 00:08:23.280 --rc genhtml_legend=1 00:08:23.280 --rc geninfo_all_blocks=1 00:08:23.280 --rc geninfo_unexecuted_blocks=1 00:08:23.280 00:08:23.280 ' 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:23.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.280 --rc genhtml_branch_coverage=1 00:08:23.280 --rc genhtml_function_coverage=1 00:08:23.280 --rc genhtml_legend=1 00:08:23.280 --rc geninfo_all_blocks=1 00:08:23.280 --rc geninfo_unexecuted_blocks=1 00:08:23.280 00:08:23.280 ' 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.280 16:51:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:23.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:23.280 16:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:31.542 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:31.543 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:31.543 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.543 16:51:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:31.543 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:31.543 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:31.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:08:31.543 00:08:31.543 --- 10.0.0.2 ping statistics --- 00:08:31.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.543 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:31.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:08:31.543 00:08:31.543 --- 10.0.0.1 ping statistics --- 00:08:31.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.543 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1790397 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1790397 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1790397 ']' 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.543 16:51:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.543 [2024-11-20 16:51:22.853968] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
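[Annotation] The target for the bdev_io_wait case has just been launched: nvmfappstart runs nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, so the app halts after EAL init until RPCs arrive (the bdev_set_options / framework_start_init calls a few lines below). A hedged sketch of that launch-and-wait shape; the polling probe is an assumption about what the harness's waitforlisten does, not a literal excerpt from this log:

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done  # assumed readiness probe
  $rpc bdev_set_options -p 5 -c 1   # logged below at bdev_io_wait.sh@18; must precede init
  $rpc framework_start_init         # sh@19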
00:08:31.543 [2024-11-20 16:51:22.854032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.543 [2024-11-20 16:51:22.956349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.543 [2024-11-20 16:51:23.011550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.544 [2024-11-20 16:51:23.011605] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.544 [2024-11-20 16:51:23.011614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.544 [2024-11-20 16:51:23.011621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.544 [2024-11-20 16:51:23.011628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.544 [2024-11-20 16:51:23.013743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.544 [2024-11-20 16:51:23.013904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.544 [2024-11-20 16:51:23.014066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.544 [2024-11-20 16:51:23.014067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.544 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.544 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:31.544 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:31.544 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:31.544 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:31.804 [2024-11-20 16:51:23.809293] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.804 Malloc0 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.804 [2024-11-20 16:51:23.874743] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1790670 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1790672 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:31.804 { 00:08:31.804 "params": { 
00:08:31.804 "name": "Nvme$subsystem", 00:08:31.804 "trtype": "$TEST_TRANSPORT", 00:08:31.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.804 "adrfam": "ipv4", 00:08:31.804 "trsvcid": "$NVMF_PORT", 00:08:31.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.804 "hdgst": ${hdgst:-false}, 00:08:31.804 "ddgst": ${ddgst:-false} 00:08:31.804 }, 00:08:31.804 "method": "bdev_nvme_attach_controller" 00:08:31.804 } 00:08:31.804 EOF 00:08:31.804 )") 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1790674 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:31.804 { 00:08:31.804 "params": { 00:08:31.804 "name": "Nvme$subsystem", 00:08:31.804 "trtype": "$TEST_TRANSPORT", 00:08:31.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.804 "adrfam": "ipv4", 00:08:31.804 "trsvcid": "$NVMF_PORT", 00:08:31.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.804 "hdgst": ${hdgst:-false}, 00:08:31.804 "ddgst": ${ddgst:-false} 00:08:31.804 }, 00:08:31.804 "method": "bdev_nvme_attach_controller" 00:08:31.804 } 00:08:31.804 EOF 00:08:31.804 )") 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1790677 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:31.804 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:31.805 { 00:08:31.805 "params": { 00:08:31.805 "name": "Nvme$subsystem", 00:08:31.805 "trtype": "$TEST_TRANSPORT", 00:08:31.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.805 "adrfam": "ipv4", 00:08:31.805 "trsvcid": "$NVMF_PORT", 00:08:31.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.805 "hdgst": ${hdgst:-false}, 
00:08:31.805 "ddgst": ${ddgst:-false} 00:08:31.805 }, 00:08:31.805 "method": "bdev_nvme_attach_controller" 00:08:31.805 } 00:08:31.805 EOF 00:08:31.805 )") 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:31.805 { 00:08:31.805 "params": { 00:08:31.805 "name": "Nvme$subsystem", 00:08:31.805 "trtype": "$TEST_TRANSPORT", 00:08:31.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.805 "adrfam": "ipv4", 00:08:31.805 "trsvcid": "$NVMF_PORT", 00:08:31.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.805 "hdgst": ${hdgst:-false}, 00:08:31.805 "ddgst": ${ddgst:-false} 00:08:31.805 }, 00:08:31.805 "method": "bdev_nvme_attach_controller" 00:08:31.805 } 00:08:31.805 EOF 00:08:31.805 )") 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1790670 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:31.805 "params": { 00:08:31.805 "name": "Nvme1", 00:08:31.805 "trtype": "tcp", 00:08:31.805 "traddr": "10.0.0.2", 00:08:31.805 "adrfam": "ipv4", 00:08:31.805 "trsvcid": "4420", 00:08:31.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:31.805 "hdgst": false, 00:08:31.805 "ddgst": false 00:08:31.805 }, 00:08:31.805 "method": "bdev_nvme_attach_controller" 00:08:31.805 }' 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:31.805 "params": { 00:08:31.805 "name": "Nvme1", 00:08:31.805 "trtype": "tcp", 00:08:31.805 "traddr": "10.0.0.2", 00:08:31.805 "adrfam": "ipv4", 00:08:31.805 "trsvcid": "4420", 00:08:31.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:31.805 "hdgst": false, 00:08:31.805 "ddgst": false 00:08:31.805 }, 00:08:31.805 "method": "bdev_nvme_attach_controller" 00:08:31.805 }' 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:31.805 "params": { 00:08:31.805 "name": "Nvme1", 00:08:31.805 "trtype": "tcp", 00:08:31.805 "traddr": "10.0.0.2", 00:08:31.805 "adrfam": "ipv4", 00:08:31.805 "trsvcid": "4420", 00:08:31.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:31.805 "hdgst": false, 00:08:31.805 "ddgst": false 00:08:31.805 }, 00:08:31.805 "method": "bdev_nvme_attach_controller" 00:08:31.805 }' 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:31.805 16:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:31.805 "params": { 00:08:31.805 "name": "Nvme1", 00:08:31.805 "trtype": "tcp", 00:08:31.805 "traddr": "10.0.0.2", 00:08:31.805 "adrfam": "ipv4", 00:08:31.805 "trsvcid": "4420", 00:08:31.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:31.805 "hdgst": false, 00:08:31.805 "ddgst": false 00:08:31.805 }, 00:08:31.805 "method": "bdev_nvme_attach_controller" 00:08:31.805 }' 00:08:31.805 [2024-11-20 16:51:23.934343] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:08:31.805 [2024-11-20 16:51:23.934414] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:31.805 [2024-11-20 16:51:23.935355] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:08:31.805 [2024-11-20 16:51:23.935419] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:31.805 [2024-11-20 16:51:23.935897] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:08:31.805 [2024-11-20 16:51:23.935952] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:31.805 [2024-11-20 16:51:23.939454] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
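[Annotation] The printf blocks above are the generated attach configs each bdevperf instance consumes: all four connect the same controller, Nvme1 at 10.0.0.2:4420 on subsystem nqn.2016-06.io.spdk:cnode1, with header and data digests off. Each entry should be equivalent to a direct rpc.py call of roughly this shape (flag spellings are my reading of scripts/rpc.py, not taken from this log):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1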
00:08:31.805 [2024-11-20 16:51:23.939525] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:32.065 [2024-11-20 16:51:24.152678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.065 [2024-11-20 16:51:24.192905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:32.326 [2024-11-20 16:51:24.244771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.326 [2024-11-20 16:51:24.284869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:32.326 [2024-11-20 16:51:24.337434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.326 [2024-11-20 16:51:24.379360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:32.326 [2024-11-20 16:51:24.389149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.326 [2024-11-20 16:51:24.427138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:32.586 Running I/O for 1 seconds... 00:08:32.586 Running I/O for 1 seconds... 00:08:32.586 Running I/O for 1 seconds... 00:08:32.586 Running I/O for 1 seconds... 00:08:33.527 6824.00 IOPS, 26.66 MiB/s 00:08:33.527 Latency(us) 00:08:33.527 [2024-11-20T15:51:25.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.527 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:33.527 Nvme1n1 : 1.02 6836.18 26.70 0.00 0.00 18560.48 7973.55 28398.93 00:08:33.527 [2024-11-20T15:51:25.703Z] =================================================================================================================== 00:08:33.527 [2024-11-20T15:51:25.703Z] Total : 6836.18 26.70 0.00 0.00 18560.48 7973.55 28398.93 00:08:33.527 10556.00 IOPS, 41.23 MiB/s [2024-11-20T15:51:25.703Z] 6785.00 IOPS, 26.50 MiB/s 00:08:33.527 Latency(us) 00:08:33.527 [2024-11-20T15:51:25.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.527 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:33.527 Nvme1n1 : 1.01 10616.35 41.47 0.00 0.00 12007.29 6171.31 24466.77 00:08:33.527 [2024-11-20T15:51:25.703Z] =================================================================================================================== 00:08:33.527 [2024-11-20T15:51:25.703Z] Total : 10616.35 41.47 0.00 0.00 12007.29 6171.31 24466.77 00:08:33.527 00:08:33.527 Latency(us) 00:08:33.527 [2024-11-20T15:51:25.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.527 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:33.527 Nvme1n1 : 1.01 6901.83 26.96 0.00 0.00 18494.00 4396.37 38447.79 00:08:33.527 [2024-11-20T15:51:25.703Z] =================================================================================================================== 00:08:33.527 [2024-11-20T15:51:25.703Z] Total : 6901.83 26.96 0.00 0.00 18494.00 4396.37 38447.79 00:08:33.788 182712.00 IOPS, 713.72 MiB/s 00:08:33.788 Latency(us) 00:08:33.788 [2024-11-20T15:51:25.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.788 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:33.788 Nvme1n1 : 1.00 182347.46 712.29 0.00 0.00 698.18 300.37 1979.73 00:08:33.788 [2024-11-20T15:51:25.964Z] 
=================================================================================================================== 00:08:33.788 [2024-11-20T15:51:25.964Z] Total : 182347.46 712.29 0.00 0.00 698.18 300.37 1979.73 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1790672 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1790674 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1790677 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:33.788 rmmod nvme_tcp 00:08:33.788 rmmod nvme_fabrics 00:08:33.788 rmmod nvme_keyring 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1790397 ']' 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1790397 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1790397 ']' 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1790397 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.788 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1790397 00:08:34.049 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.049 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.049 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 1790397' 00:08:34.049 killing process with pid 1790397 00:08:34.049 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1790397 00:08:34.049 16:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1790397 00:08:34.049 16:51:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:34.049 16:51:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:34.049 16:51:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:34.049 16:51:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:34.049 16:51:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:34.049 16:51:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:34.049 16:51:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:34.049 16:51:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:34.049 16:51:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:34.049 16:51:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.050 16:51:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.050 16:51:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:36.592 00:08:36.592 real 0m13.174s 00:08:36.592 user 0m20.104s 00:08:36.592 sys 0m7.379s 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.592 ************************************ 00:08:36.592 END TEST nvmf_bdev_io_wait 00:08:36.592 ************************************ 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:36.592 ************************************ 00:08:36.592 START TEST nvmf_queue_depth 00:08:36.592 ************************************ 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:36.592 * Looking for test storage... 
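The iptr step in nvmftestfini above undoes the harness's firewall changes in a single pass: every rule the setup added carries an SPDK_NVMF comment, so restoring the saved ruleset minus the tagged lines removes exactly those rules and nothing else:

# Re-apply the current ruleset with the harness-tagged rules filtered out.
iptables-save | grep -v SPDK_NVMF | iptables-restore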
00:08:36.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.592 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:36.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.593 --rc genhtml_branch_coverage=1 00:08:36.593 --rc genhtml_function_coverage=1 00:08:36.593 --rc genhtml_legend=1 00:08:36.593 --rc geninfo_all_blocks=1 00:08:36.593 --rc geninfo_unexecuted_blocks=1 00:08:36.593 00:08:36.593 ' 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:36.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.593 --rc genhtml_branch_coverage=1 00:08:36.593 --rc genhtml_function_coverage=1 00:08:36.593 --rc genhtml_legend=1 00:08:36.593 --rc geninfo_all_blocks=1 00:08:36.593 --rc geninfo_unexecuted_blocks=1 00:08:36.593 00:08:36.593 ' 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:36.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.593 --rc genhtml_branch_coverage=1 00:08:36.593 --rc genhtml_function_coverage=1 00:08:36.593 --rc genhtml_legend=1 00:08:36.593 --rc geninfo_all_blocks=1 00:08:36.593 --rc geninfo_unexecuted_blocks=1 00:08:36.593 00:08:36.593 ' 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:36.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.593 --rc genhtml_branch_coverage=1 00:08:36.593 --rc genhtml_function_coverage=1 00:08:36.593 --rc genhtml_legend=1 00:08:36.593 --rc geninfo_all_blocks=1 00:08:36.593 --rc geninfo_unexecuted_blocks=1 00:08:36.593 00:08:36.593 ' 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
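The scripts/common.sh trace above (lt 1.15 2 via cmp_versions) is checking whether the installed lcov predates 2.x: both version strings are split on '.', '-' and ':' and compared field by field. A condensed restatement of that logic, not the verbatim SPDK helper:

# Returns success when $1 is strictly older than $2; missing fields
# are treated as 0, and non-numeric fields are out of scope here.
lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1   # equal is not less-than
}
lt 1.15 2 && echo 'lcov is older than 2.x'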
-- nvmf/common.sh@7 -- # uname -s 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:36.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:36.593 16:51:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:44.738 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:44.738 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:44.738 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:44.739 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:44.739 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
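The NIC probe above resolves each supported PCI function to its kernel net device by globbing sysfs; that is how 0000:4b:00.0 and 0000:4b:00.1 become cvl_0_0 and cvl_0_1. The core of the lookup:

# List the net interfaces bound to one PCI function.
pci=0000:4b:00.0                          # first e810 port from this run
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"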
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:44.739 16:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:44.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:44.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:08:44.739 00:08:44.739 --- 10.0.0.2 ping statistics --- 00:08:44.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.739 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:44.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:44.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:08:44.739 00:08:44.739 --- 10.0.0.1 ping statistics --- 00:08:44.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.739 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1795372 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1795372 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1795372 ']' 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.739 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.739 [2024-11-20 16:51:36.124843] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
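nvmf_tcp_init above lets a single two-port host act as both target and initiator by hiding the target-side port in a private network namespace, so the 10.0.0.1 to 10.0.0.2 traffic really crosses the wire between the two ports. Stripped to its essentials, with the interface names and addresses this run used:

ip netns add cvl_0_0_ns_spdk                  # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in the default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                            # initiator -> target reachability check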
00:08:44.739 [2024-11-20 16:51:36.124910] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.739 [2024-11-20 16:51:36.229963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.739 [2024-11-20 16:51:36.280270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.739 [2024-11-20 16:51:36.280319] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.739 [2024-11-20 16:51:36.280328] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.739 [2024-11-20 16:51:36.280335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.739 [2024-11-20 16:51:36.280341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.739 [2024-11-20 16:51:36.281047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.001 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.001 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:45.001 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.001 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.001 16:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.001 [2024-11-20 16:51:37.012644] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.001 Malloc0 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.001 16:51:37 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.001 [2024-11-20 16:51:37.073928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1795573 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1795573 /var/tmp/bdevperf.sock 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1795573 ']' 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:45.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.001 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.001 [2024-11-20 16:51:37.133103] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
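Lines 23-27 of queue_depth.sh above stand up the entire target with a handful of RPCs: create the TCP transport, back it with a 64 MiB malloc bdev, and expose that bdev as a namespace of cnode1 listening on 10.0.0.2:4420. Outside the harness the same sequence can be issued with scripts/rpc.py (rpc_cmd in the trace is a thin wrapper around it):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420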
00:08:45.001 [2024-11-20 16:51:37.133176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1795573 ] 00:08:45.262 [2024-11-20 16:51:37.224312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.262 [2024-11-20 16:51:37.277056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.830 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.830 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:45.830 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:45.830 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.830 16:51:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:46.089 NVMe0n1 00:08:46.089 16:51:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.089 16:51:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:46.089 Running I/O for 10 seconds... 00:08:48.412 9657.00 IOPS, 37.72 MiB/s [2024-11-20T15:51:41.530Z] 10676.50 IOPS, 41.71 MiB/s [2024-11-20T15:51:42.470Z] 10925.00 IOPS, 42.68 MiB/s [2024-11-20T15:51:43.411Z] 11381.00 IOPS, 44.46 MiB/s [2024-11-20T15:51:44.349Z] 11829.80 IOPS, 46.21 MiB/s [2024-11-20T15:51:45.289Z] 12098.83 IOPS, 47.26 MiB/s [2024-11-20T15:51:46.229Z] 12286.86 IOPS, 48.00 MiB/s [2024-11-20T15:51:47.610Z] 12442.62 IOPS, 48.60 MiB/s [2024-11-20T15:51:48.551Z] 12588.11 IOPS, 49.17 MiB/s [2024-11-20T15:51:48.551Z] 12677.20 IOPS, 49.52 MiB/s 00:08:56.375 Latency(us) 00:08:56.375 [2024-11-20T15:51:48.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.375 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:56.375 Verification LBA range: start 0x0 length 0x4000 00:08:56.375 NVMe0n1 : 10.06 12693.91 49.59 0.00 0.00 80374.54 25231.36 69031.25 00:08:56.375 [2024-11-20T15:51:48.551Z] =================================================================================================================== 00:08:56.375 [2024-11-20T15:51:48.551Z] Total : 12693.91 49.59 0.00 0.00 80374.54 25231.36 69031.25 00:08:56.375 { 00:08:56.375 "results": [ 00:08:56.375 { 00:08:56.375 "job": "NVMe0n1", 00:08:56.375 "core_mask": "0x1", 00:08:56.375 "workload": "verify", 00:08:56.375 "status": "finished", 00:08:56.375 "verify_range": { 00:08:56.375 "start": 0, 00:08:56.375 "length": 16384 00:08:56.375 }, 00:08:56.375 "queue_depth": 1024, 00:08:56.375 "io_size": 4096, 00:08:56.375 "runtime": 10.06349, 00:08:56.375 "iops": 12693.906388340427, 00:08:56.375 "mibps": 49.58557182945479, 00:08:56.375 "io_failed": 0, 00:08:56.375 "io_timeout": 0, 00:08:56.375 "avg_latency_us": 80374.54377475962, 00:08:56.375 "min_latency_us": 25231.36, 00:08:56.375 "max_latency_us": 69031.25333333333 00:08:56.375 } 00:08:56.375 ], 00:08:56.375 "core_count": 1 00:08:56.375 } 00:08:56.375 16:51:48 
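The 10-second verify run above is driven in two steps: attach the remote controller over bdevperf's private RPC socket, then trigger the preconfigured workload with bdevperf.py. As standalone commands:

# bdevperf was started with: -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests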
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1795573 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1795573 ']' 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1795573 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1795573 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1795573' 00:08:56.375 killing process with pid 1795573 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1795573 00:08:56.375 Received shutdown signal, test time was about 10.000000 seconds 00:08:56.375 00:08:56.375 Latency(us) 00:08:56.375 [2024-11-20T15:51:48.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.375 [2024-11-20T15:51:48.551Z] =================================================================================================================== 00:08:56.375 [2024-11-20T15:51:48.551Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1795573 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:56.375 rmmod nvme_tcp 00:08:56.375 rmmod nvme_fabrics 00:08:56.375 rmmod nvme_keyring 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1795372 ']' 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1795372 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1795372 ']' 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 1795372 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.375 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1795372 00:08:56.636 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:56.636 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:56.636 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1795372' 00:08:56.636 killing process with pid 1795372 00:08:56.636 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1795372 00:08:56.636 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1795372 00:08:56.636 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:56.636 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:56.636 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:56.636 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:56.636 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:56.636 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:56.636 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:56.636 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:56.636 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:56.636 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.636 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.636 16:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.190 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:59.190 00:08:59.190 real 0m22.476s 00:08:59.190 user 0m25.305s 00:08:59.190 sys 0m7.298s 00:08:59.190 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.190 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.190 ************************************ 00:08:59.190 END TEST nvmf_queue_depth 00:08:59.190 ************************************ 00:08:59.190 16:51:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:59.190 16:51:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:59.190 16:51:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.190 16:51:50 nvmf_tcp.nvmf_target_core -- 
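killprocess above guards every signal it sends: confirm the pid is still alive (kill -0), verify its comm name so a recycled pid is never hit, then kill and reap. A condensed sketch of that guard (the real helper also special-cases processes running under sudo, which is elided here):

killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1            # gone already?
  local name; name=$(ps --no-headers -o comm= "$pid")
  echo "killing process with pid $pid ($name)"
  kill "$pid" && wait "$pid"                        # signal, then reap
}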
common/autotest_common.sh@10 -- # set +x 00:08:59.190 ************************************ 00:08:59.190 START TEST nvmf_target_multipath 00:08:59.190 ************************************ 00:08:59.190 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:59.190 * Looking for test storage... 00:08:59.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.190 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:59.190 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:59.190 16:51:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:59.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.191 --rc genhtml_branch_coverage=1 00:08:59.191 --rc genhtml_function_coverage=1 00:08:59.191 --rc genhtml_legend=1 00:08:59.191 --rc geninfo_all_blocks=1 00:08:59.191 --rc geninfo_unexecuted_blocks=1 00:08:59.191 00:08:59.191 ' 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:59.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.191 --rc genhtml_branch_coverage=1 00:08:59.191 --rc genhtml_function_coverage=1 00:08:59.191 --rc genhtml_legend=1 00:08:59.191 --rc geninfo_all_blocks=1 00:08:59.191 --rc geninfo_unexecuted_blocks=1 00:08:59.191 00:08:59.191 ' 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:59.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.191 --rc genhtml_branch_coverage=1 00:08:59.191 --rc genhtml_function_coverage=1 00:08:59.191 --rc genhtml_legend=1 00:08:59.191 --rc geninfo_all_blocks=1 00:08:59.191 --rc geninfo_unexecuted_blocks=1 00:08:59.191 00:08:59.191 ' 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:59.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.191 --rc genhtml_branch_coverage=1 00:08:59.191 --rc genhtml_function_coverage=1 00:08:59.191 --rc genhtml_legend=1 00:08:59.191 --rc geninfo_all_blocks=1 00:08:59.191 --rc geninfo_unexecuted_blocks=1 00:08:59.191 00:08:59.191 ' 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:59.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:59.191 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:59.192 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:59.192 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.192 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:59.192 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:59.192 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.192 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:59.192 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:59.192 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:59.192 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.192 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.192 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.192 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:59.192 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:59.192 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:59.192 16:51:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:07.331 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:07.331 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:07.331 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.331 16:51:58 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:07.331 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:07.331 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:07.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:07.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:09:07.332 00:09:07.332 --- 10.0.0.2 ping statistics --- 00:09:07.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.332 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:07.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:07.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:09:07.332 00:09:07.332 --- 10.0.0.1 ping statistics --- 00:09:07.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.332 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:07.332 only one NIC for nvmf test 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
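[editor's note] The nvmf_tcp_init sequence traced above builds the test topology by hand: carve off a network namespace for the target side, move one port into it, address both ends, open the listener port, and ping in both directions. A minimal standalone sketch of the same steps, assuming a veth pair stands in for the two physical E810 ports — the demo* names, addresses outside the trace, and a root shell are assumptions, not part of the SPDK scripts:

#!/usr/bin/env bash
set -euo pipefail

NS=demo_tgt_ns                    # stands in for cvl_0_0_ns_spdk
TGT_IF=demo0 INI_IF=demo1         # stand in for cvl_0_0 / cvl_0_1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1   # same addressing as the trace

ip netns add "$NS"
ip link add "$TGT_IF" type veth peer name "$INI_IF"
ip link set "$TGT_IF" netns "$NS"              # target port lives in the namespace

ip addr add "$INI_IP/24" dev "$INI_IF"         # initiator side stays in the root ns
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port, tagged so teardown can find the rule again.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:demo'

# The same two reachability checks the trace performs.
ping -c 1 "$TGT_IP"
ip netns exec "$NS" ping -c 1 "$INI_IP"

The SPDK_NVMF comment tag is what lets the later iptr cleanup strip exactly these rules with a grep, as the teardown trace below shows.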
00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:07.332 rmmod nvme_tcp 00:09:07.332 rmmod nvme_fabrics 00:09:07.332 rmmod nvme_keyring 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.332 16:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:08.716 00:09:08.716 real 0m9.916s 00:09:08.716 user 0m2.166s 00:09:08.716 sys 0m5.715s 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:08.716 ************************************ 00:09:08.716 END TEST nvmf_target_multipath 00:09:08.716 ************************************ 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:08.716 ************************************ 00:09:08.716 START TEST nvmf_zcopy 00:09:08.716 ************************************ 00:09:08.716 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:08.978 * Looking for test storage... 
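[editor's note] The nvmftestfini trace just above (the set +e / for i in {1..20} retry around modprobe -v -r, the iptables-save | grep -v SPDK_NVMF | iptables-restore filter, then remove_spdk_ns and the address flush) condenses to a small teardown routine. A sketch under the assumption that rules were tagged SPDK_NVMF as in the setup sketch; the function and demo_* names are illustrative, not the SPDK originals, and the real common.sh loop is more thorough:

demo_nvmf_cleanup() {
  sync
  set +e                                 # module unload may race with open connections
  for _ in {1..20}; do
    modprobe -v -r nvme-tcp && break     # rmmods nvme_tcp; retried until it sticks
    sleep 1
  done
  modprobe -v -r nvme-fabrics            # drags nvme_fabrics / nvme_keyring out too
  set -e

  # Reload the ruleset minus every rule the test tagged with SPDK_NVMF.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  ip netns delete demo_tgt_ns 2>/dev/null || true
  ip -4 addr flush demo1 2>/dev/null || true
}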
00:09:08.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.978 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:08.978 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:08.978 16:52:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:08.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.978 --rc genhtml_branch_coverage=1 00:09:08.978 --rc genhtml_function_coverage=1 00:09:08.978 --rc genhtml_legend=1 00:09:08.978 --rc geninfo_all_blocks=1 00:09:08.978 --rc geninfo_unexecuted_blocks=1 00:09:08.978 00:09:08.978 ' 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:08.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.978 --rc genhtml_branch_coverage=1 00:09:08.978 --rc genhtml_function_coverage=1 00:09:08.978 --rc genhtml_legend=1 00:09:08.978 --rc geninfo_all_blocks=1 00:09:08.978 --rc geninfo_unexecuted_blocks=1 00:09:08.978 00:09:08.978 ' 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:08.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.978 --rc genhtml_branch_coverage=1 00:09:08.978 --rc genhtml_function_coverage=1 00:09:08.978 --rc genhtml_legend=1 00:09:08.978 --rc geninfo_all_blocks=1 00:09:08.978 --rc geninfo_unexecuted_blocks=1 00:09:08.978 00:09:08.978 ' 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:08.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.978 --rc genhtml_branch_coverage=1 00:09:08.978 --rc genhtml_function_coverage=1 00:09:08.978 --rc genhtml_legend=1 00:09:08.978 --rc geninfo_all_blocks=1 00:09:08.978 --rc geninfo_unexecuted_blocks=1 00:09:08.978 00:09:08.978 ' 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.978 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:08.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:08.979 16:52:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.118 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:17.119 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:17.119 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:17.119 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:17.119 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:17.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:17.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:09:17.119 00:09:17.119 --- 10.0.0.2 ping statistics --- 00:09:17.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.119 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:09:17.119 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:17.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:09:17.119 00:09:17.120 --- 10.0.0.1 ping statistics --- 00:09:17.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.120 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1806390 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1806390 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1806390 ']' 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.120 16:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.120 [2024-11-20 16:52:08.706464] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
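At this point the harness has finished nvmf_tcp_init and is starting the target: the discovered e810 port cvl_0_0 was moved into a private network namespace, both ends were addressed out of 10.0.0.0/24, TCP port 4420 was opened in iptables, one ping in each direction proved the path, and nvmf_tgt was launched inside the namespace. A minimal sketch of that sequence, using this run's interface names and addresses (commands taken from the trace above; the harness's ipts/waitforlisten wrappers and error handling are omitted):

# Sketch of the topology setup traced above. cvl_0_0/cvl_0_1 and the
# 10.0.0.x addresses are this run's values, not fixed defaults.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"          # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> root namespace
# Run the target inside the namespace so NVMe/TCP traffic crosses the real link:
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

Because the target binds 10.0.0.2 inside the namespace while the initiator connects from 10.0.0.1 outside it, every TCP connection in the rest of the run traverses the two physical e810 ports rather than loopback.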
00:09:17.120 [2024-11-20 16:52:08.706538] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.120 [2024-11-20 16:52:08.806977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.120 [2024-11-20 16:52:08.856521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.120 [2024-11-20 16:52:08.856571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.120 [2024-11-20 16:52:08.856580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.120 [2024-11-20 16:52:08.856587] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.120 [2024-11-20 16:52:08.856594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.120 [2024-11-20 16:52:08.857365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.382 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.382 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:17.382 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:17.382 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:17.382 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.648 [2024-11-20 16:52:09.584930] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.648 [2024-11-20 16:52:09.609227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.648 malloc0 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:17.648 { 00:09:17.648 "params": { 00:09:17.648 "name": "Nvme$subsystem", 00:09:17.648 "trtype": "$TEST_TRANSPORT", 00:09:17.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:17.648 "adrfam": "ipv4", 00:09:17.648 "trsvcid": "$NVMF_PORT", 00:09:17.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:17.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:17.648 "hdgst": ${hdgst:-false}, 00:09:17.648 "ddgst": ${ddgst:-false} 00:09:17.648 }, 00:09:17.648 "method": "bdev_nvme_attach_controller" 00:09:17.648 } 00:09:17.648 EOF 00:09:17.648 )") 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
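The rpc_cmd calls traced above provision the target end to end before any I/O: a TCP transport created with --zcopy, subsystem cnode1 with a data listener and a discovery listener on the namespace address, and a 32 MB malloc ramdisk attached as namespace 1. rpc_cmd is the harness's thin wrapper around SPDK's scripts/rpc.py, so the equivalent direct sequence (arguments copied verbatim from the trace, default RPC socket /var/tmp/spdk.sock as shown by the waitforlisten message) is:

# Same provisioning as the rpc_cmd trace above, issued via scripts/rpc.py directly.
rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy    # TCP transport with zero-copy enabled
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                     # -a: allow any host, -m: max 10 namespaces
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_malloc_create 32 4096 -b malloc0           # 32 MB ramdisk, 4096-byte blocks
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # becomes NSID 1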
00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:09:17.648 16:52:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:17.648 "params": {
00:09:17.648 "name": "Nvme1",
00:09:17.648 "trtype": "tcp",
00:09:17.648 "traddr": "10.0.0.2",
00:09:17.648 "adrfam": "ipv4",
00:09:17.648 "trsvcid": "4420",
00:09:17.648 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:17.648 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:17.648 "hdgst": false,
00:09:17.648 "ddgst": false
00:09:17.648 },
00:09:17.648 "method": "bdev_nvme_attach_controller"
00:09:17.648 }'
00:09:17.648 [2024-11-20 16:52:09.709334] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization...
00:09:17.648 [2024-11-20 16:52:09.709400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1806449 ]
00:09:17.648 [2024-11-20 16:52:09.803274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:17.978 [2024-11-20 16:52:09.857745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:18.262 Running I/O for 10 seconds...
00:09:20.147 6457.00 IOPS, 50.45 MiB/s [2024-11-20T15:52:13.265Z]
7531.50 IOPS, 58.84 MiB/s [2024-11-20T15:52:14.650Z]
8264.33 IOPS, 64.57 MiB/s [2024-11-20T15:52:15.593Z]
8634.75 IOPS, 67.46 MiB/s [2024-11-20T15:52:16.535Z]
8854.60 IOPS, 69.18 MiB/s [2024-11-20T15:52:17.478Z]
9000.00 IOPS, 70.31 MiB/s [2024-11-20T15:52:18.417Z]
9102.29 IOPS, 71.11 MiB/s [2024-11-20T15:52:19.360Z]
9179.25 IOPS, 71.71 MiB/s [2024-11-20T15:52:20.301Z]
9238.33 IOPS, 72.17 MiB/s [2024-11-20T15:52:20.301Z]
9286.80 IOPS, 72.55 MiB/s
00:09:28.126 Latency(us)
00:09:28.126 [2024-11-20T15:52:20.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:28.126 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:28.126 Verification LBA range: start 0x0 length 0x1000
00:09:28.126 Nvme1n1 : 10.01 9286.34 72.55 0.00 0.00 13737.30 2334.72 28398.93
00:09:28.126 [2024-11-20T15:52:20.302Z] ===================================================================================================================
00:09:28.126 [2024-11-20T15:52:20.302Z] Total : 9286.34 72.55 0.00 0.00 13737.30 2334.72 28398.93
00:09:28.387 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1808607
00:09:28.387 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:28.387 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:28.387 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:28.387 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:28.387 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:09:28.387 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:09:28.387 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:28.387 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:28.387 {
00:09:28.387 "params": {
00:09:28.387 "name":
"Nvme$subsystem", 00:09:28.387 "trtype": "$TEST_TRANSPORT", 00:09:28.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:28.387 "adrfam": "ipv4", 00:09:28.387 "trsvcid": "$NVMF_PORT", 00:09:28.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:28.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:28.387 "hdgst": ${hdgst:-false}, 00:09:28.387 "ddgst": ${ddgst:-false} 00:09:28.387 }, 00:09:28.387 "method": "bdev_nvme_attach_controller" 00:09:28.387 } 00:09:28.387 EOF 00:09:28.387 )") 00:09:28.388 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:28.388 [2024-11-20 16:52:20.373168] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.388 [2024-11-20 16:52:20.373200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.388 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:28.388 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:28.388 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:28.388 "params": { 00:09:28.388 "name": "Nvme1", 00:09:28.388 "trtype": "tcp", 00:09:28.388 "traddr": "10.0.0.2", 00:09:28.388 "adrfam": "ipv4", 00:09:28.388 "trsvcid": "4420", 00:09:28.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:28.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:28.388 "hdgst": false, 00:09:28.388 "ddgst": false 00:09:28.388 }, 00:09:28.388 "method": "bdev_nvme_attach_controller" 00:09:28.388 }' 00:09:28.388 [2024-11-20 16:52:20.385162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.388 [2024-11-20 16:52:20.385171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.388 [2024-11-20 16:52:20.397190] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.388 [2024-11-20 16:52:20.397197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.388 [2024-11-20 16:52:20.409217] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.388 [2024-11-20 16:52:20.409225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.388 [2024-11-20 16:52:20.416017] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:09:28.388 [2024-11-20 16:52:20.416064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1808607 ] 00:09:28.388 [2024-11-20 16:52:20.421248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.388 [2024-11-20 16:52:20.421255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.388 [2024-11-20 16:52:20.433278] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.388 [2024-11-20 16:52:20.433285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.388 [2024-11-20 16:52:20.445309] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.388 [2024-11-20 16:52:20.445317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.388 [2024-11-20 16:52:20.457340] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.388 [2024-11-20 16:52:20.457346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.388 [2024-11-20 16:52:20.469371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.388 [2024-11-20 16:52:20.469379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.388 [2024-11-20 16:52:20.481402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.388 [2024-11-20 16:52:20.481410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.388 [2024-11-20 16:52:20.493431] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.388 [2024-11-20 16:52:20.493439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.388 [2024-11-20 16:52:20.498316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.388 [2024-11-20 16:52:20.505463] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.388 [2024-11-20 16:52:20.505472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.388 [2024-11-20 16:52:20.517494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.388 [2024-11-20 16:52:20.517503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.388 [2024-11-20 16:52:20.528264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.388 [2024-11-20 16:52:20.529522] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.388 [2024-11-20 16:52:20.529532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.388 [2024-11-20 16:52:20.541556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.388 [2024-11-20 16:52:20.541565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.388 [2024-11-20 16:52:20.553587] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.388 [2024-11-20 16:52:20.553599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.650 [2024-11-20 16:52:20.565616] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:09:28.650 [2024-11-20 16:52:20.565627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.650 [2024-11-20 16:52:20.577646] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.650 [2024-11-20 16:52:20.577656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.650 [2024-11-20 16:52:20.589676] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.650 [2024-11-20 16:52:20.589684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.650 [2024-11-20 16:52:20.601708] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.650 [2024-11-20 16:52:20.601716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.650 [2024-11-20 16:52:20.613754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.650 [2024-11-20 16:52:20.613771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.650 [2024-11-20 16:52:20.625776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.650 [2024-11-20 16:52:20.625785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.650 [2024-11-20 16:52:20.637807] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.650 [2024-11-20 16:52:20.637816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.650 [2024-11-20 16:52:20.649837] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.650 [2024-11-20 16:52:20.649844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.650 [2024-11-20 16:52:20.661867] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.650 [2024-11-20 16:52:20.661874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.650 [2024-11-20 16:52:20.673898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.650 [2024-11-20 16:52:20.673907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.650 [2024-11-20 16:52:20.685932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.650 [2024-11-20 16:52:20.685942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.650 [2024-11-20 16:52:20.697964] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.650 [2024-11-20 16:52:20.697973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.650 [2024-11-20 16:52:20.710000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.650 [2024-11-20 16:52:20.710014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.650 Running I/O for 5 seconds... 
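The error pairs that began interleaving with the second bdevperf's startup, and that dominate the remainder of this excerpt, are part of the test scenario rather than a failure of the run: while the randrw job drives the zero-copy TCP path for 5 seconds, the harness keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which must fail because malloc0 already holds that NSID, exercising RPC handling while zcopy I/O is in flight. A plausible reconstruction of the driving loop; the exact iteration logic in zcopy.sh is not visible in this trace:

# Hypothetical reconstruction, not zcopy.sh verbatim. perfpid is the
# background bdevperf PID captured by the harness above.
while kill -0 "$perfpid" 2>/dev/null; do
    # Expected to fail: "Requested NSID 1 already in use"
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done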
00:09:28.650 [2024-11-20 16:52:20.722026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.651 [2024-11-20 16:52:20.722034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.651 [2024-11-20 16:52:20.737277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.651 [2024-11-20 16:52:20.737293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.651 [2024-11-20 16:52:20.750520] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.651 [2024-11-20 16:52:20.750536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.651 [2024-11-20 16:52:20.764014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.651 [2024-11-20 16:52:20.764029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.651 [2024-11-20 16:52:20.776950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.651 [2024-11-20 16:52:20.776965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.651 [2024-11-20 16:52:20.789938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.651 [2024-11-20 16:52:20.789953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.651 [2024-11-20 16:52:20.803599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.651 [2024-11-20 16:52:20.803614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.651 [2024-11-20 16:52:20.817141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.651 [2024-11-20 16:52:20.817157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.911 [2024-11-20 16:52:20.830274] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.911 [2024-11-20 16:52:20.830289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.911 [2024-11-20 16:52:20.843496] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.911 [2024-11-20 16:52:20.843511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.911 [2024-11-20 16:52:20.857080] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.911 [2024-11-20 16:52:20.857095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.911 [2024-11-20 16:52:20.870364] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.911 [2024-11-20 16:52:20.870379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.911 [2024-11-20 16:52:20.883339] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.911 [2024-11-20 16:52:20.883354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.911 [2024-11-20 16:52:20.896187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.911 [2024-11-20 16:52:20.896202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.911 [2024-11-20 16:52:20.908723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.911 
[2024-11-20 16:52:20.908737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.911 [2024-11-20 16:52:20.921392] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.912 [2024-11-20 16:52:20.921407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.912 [2024-11-20 16:52:20.933811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.912 [2024-11-20 16:52:20.933825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.912 [2024-11-20 16:52:20.946428] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.912 [2024-11-20 16:52:20.946443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.912 [2024-11-20 16:52:20.959781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.912 [2024-11-20 16:52:20.959796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.912 [2024-11-20 16:52:20.973395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.912 [2024-11-20 16:52:20.973409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.912 [2024-11-20 16:52:20.986172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.912 [2024-11-20 16:52:20.986186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.912 [2024-11-20 16:52:20.999150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.912 [2024-11-20 16:52:20.999173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.912 [2024-11-20 16:52:21.011926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.912 [2024-11-20 16:52:21.011941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.912 [2024-11-20 16:52:21.024586] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.912 [2024-11-20 16:52:21.024601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.912 [2024-11-20 16:52:21.038312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.912 [2024-11-20 16:52:21.038327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.912 [2024-11-20 16:52:21.052088] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.912 [2024-11-20 16:52:21.052103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.912 [2024-11-20 16:52:21.064635] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.912 [2024-11-20 16:52:21.064649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.912 [2024-11-20 16:52:21.077228] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.912 [2024-11-20 16:52:21.077242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.089766] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.089781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.102698] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.102713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.116166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.116180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.129669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.129683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.142296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.142311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.155687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.155702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.168952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.168966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.181346] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.181360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.194712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.194727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.207366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.207381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.220365] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.220379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.233747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.233761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.246235] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.246253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.260110] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.260124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.273133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.273149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.286663] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.286677] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.300200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.300214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.313119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.313133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.326486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.326500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.173 [2024-11-20 16:52:21.339872] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.173 [2024-11-20 16:52:21.339886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.352905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.352920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.365655] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.365670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.378984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.378999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.391721] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.391735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.405405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.405419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.417892] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.417906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.430649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.430664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.444298] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.444313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.457280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.457294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.470735] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.470749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.483793] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.483807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.497292] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.497310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.510141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.510155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.522644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.522658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.534921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.534935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.548354] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.548369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.561990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.562005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.574622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.574636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.587775] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.587789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.434 [2024-11-20 16:52:21.601120] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.434 [2024-11-20 16:52:21.601135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.694 [2024-11-20 16:52:21.614594] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.694 [2024-11-20 16:52:21.614608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.694 [2024-11-20 16:52:21.627579] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.694 [2024-11-20 16:52:21.627594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.694 [2024-11-20 16:52:21.640495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.694 [2024-11-20 16:52:21.640510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.694 [2024-11-20 16:52:21.653465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.694 [2024-11-20 16:52:21.653479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.694 [2024-11-20 16:52:21.666727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.694 [2024-11-20 16:52:21.666741] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.694 [2024-11-20 16:52:21.679994] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.694 [2024-11-20 16:52:21.680008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.694 [2024-11-20 16:52:21.693389] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.694 [2024-11-20 16:52:21.693404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.694 [2024-11-20 16:52:21.707212] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.694 [2024-11-20 16:52:21.707226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.694 18986.00 IOPS, 148.33 MiB/s [2024-11-20T15:52:21.870Z] [2024-11-20 16:52:21.720558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.694 [2024-11-20 16:52:21.720573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.694 [2024-11-20 16:52:21.733425] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.694 [2024-11-20 16:52:21.733440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.694 [2024-11-20 16:52:21.745782] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.694 [2024-11-20 16:52:21.745796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.694 [2024-11-20 16:52:21.759401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.694 [2024-11-20 16:52:21.759416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.694 [2024-11-20 16:52:21.771930] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.694 [2024-11-20 16:52:21.771945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.694 [2024-11-20 16:52:21.785024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.694 [2024-11-20 16:52:21.785039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.694 [2024-11-20 16:52:21.798345] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.695 [2024-11-20 16:52:21.798360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.695 [2024-11-20 16:52:21.810861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.695 [2024-11-20 16:52:21.810877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.695 [2024-11-20 16:52:21.824492] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.695 [2024-11-20 16:52:21.824507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.695 [2024-11-20 16:52:21.837545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.695 [2024-11-20 16:52:21.837560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.695 [2024-11-20 16:52:21.850538] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.695 [2024-11-20 16:52:21.850552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.695 [2024-11-20 
16:52:21.863833] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.695 [2024-11-20 16:52:21.863848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:21.877448] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:21.877462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:21.890583] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:21.890597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:21.903966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:21.903981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:21.917683] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:21.917697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:21.931195] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:21.931209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:21.943796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:21.943811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:21.956537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:21.956552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:21.969997] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:21.970012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:21.982979] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:21.982994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:21.995657] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:21.995672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:22.009249] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:22.009264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:22.022723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:22.022738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:22.035202] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:22.035217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:22.048805] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:22.048821] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:22.061390] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:22.061405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:22.074698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:22.074712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:22.087570] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:22.087585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:22.100391] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:22.100406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:22.113237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:22.113253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.956 [2024-11-20 16:52:22.125544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.956 [2024-11-20 16:52:22.125559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.217 [2024-11-20 16:52:22.138932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.217 [2024-11-20 16:52:22.138947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.217 [2024-11-20 16:52:22.152533] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.217 [2024-11-20 16:52:22.152548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.217 [2024-11-20 16:52:22.165527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.217 [2024-11-20 16:52:22.165542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.217 [2024-11-20 16:52:22.179307] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.217 [2024-11-20 16:52:22.179321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.217 [2024-11-20 16:52:22.192847] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.217 [2024-11-20 16:52:22.192861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.217 [2024-11-20 16:52:22.206497] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.217 [2024-11-20 16:52:22.206512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.217 [2024-11-20 16:52:22.219626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.217 [2024-11-20 16:52:22.219641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.217 [2024-11-20 16:52:22.233246] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.217 [2024-11-20 16:52:22.233260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.217 [2024-11-20 16:52:22.246532] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.217 [2024-11-20 16:52:22.246547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.217 [2024-11-20 16:52:22.260037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.217 [2024-11-20 16:52:22.260052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.217 [2024-11-20 16:52:22.273418] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.217 [2024-11-20 16:52:22.273433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.217 [2024-11-20 16:52:22.286444] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.217 [2024-11-20 16:52:22.286459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.217 [2024-11-20 16:52:22.298979] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.217 [2024-11-20 16:52:22.298994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.217 [2024-11-20 16:52:22.311827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.217 [2024-11-20 16:52:22.311841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.217 [2024-11-20 16:52:22.325474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.217 [2024-11-20 16:52:22.325489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.217 [2024-11-20 16:52:22.338638] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.217 [2024-11-20 16:52:22.338653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.217 [2024-11-20 16:52:22.352013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.217 [2024-11-20 16:52:22.352028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.217 [2024-11-20 16:52:22.365402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.217 [2024-11-20 16:52:22.365416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.217 [2024-11-20 16:52:22.378929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.217 [2024-11-20 16:52:22.378944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.478 [2024-11-20 16:52:22.391993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.478 [2024-11-20 16:52:22.392009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.478 [2024-11-20 16:52:22.405641] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.478 [2024-11-20 16:52:22.405656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.478 [2024-11-20 16:52:22.418485] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.478 [2024-11-20 16:52:22.418500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.478 [2024-11-20 16:52:22.432162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.478 [2024-11-20 16:52:22.432177] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.478 [2024-11-20 16:52:22.445444] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.478 [2024-11-20 16:52:22.445459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.478 [2024-11-20 16:52:22.459097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.478 [2024-11-20 16:52:22.459112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.478 [2024-11-20 16:52:22.472227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.478 [2024-11-20 16:52:22.472241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.478 [2024-11-20 16:52:22.485681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.479 [2024-11-20 16:52:22.485700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.479 [2024-11-20 16:52:22.498850] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.479 [2024-11-20 16:52:22.498865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.479 [2024-11-20 16:52:22.511564] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.479 [2024-11-20 16:52:22.511578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.479 [2024-11-20 16:52:22.524216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.479 [2024-11-20 16:52:22.524230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.479 [2024-11-20 16:52:22.537336] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.479 [2024-11-20 16:52:22.537351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.479 [2024-11-20 16:52:22.550549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.479 [2024-11-20 16:52:22.550563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.479 [2024-11-20 16:52:22.564046] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.479 [2024-11-20 16:52:22.564060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.479 [2024-11-20 16:52:22.577887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.479 [2024-11-20 16:52:22.577902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.479 [2024-11-20 16:52:22.590669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.479 [2024-11-20 16:52:22.590683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.479 [2024-11-20 16:52:22.603765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.479 [2024-11-20 16:52:22.603779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.479 [2024-11-20 16:52:22.617435] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.479 [2024-11-20 16:52:22.617449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.479 [2024-11-20 16:52:22.630317] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.479 [2024-11-20 16:52:22.630331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.479 [2024-11-20 16:52:22.643814] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.479 [2024-11-20 16:52:22.643829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 [2024-11-20 16:52:22.657332] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.741 [2024-11-20 16:52:22.657347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 [2024-11-20 16:52:22.669776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.741 [2024-11-20 16:52:22.669790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 [2024-11-20 16:52:22.682511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.741 [2024-11-20 16:52:22.682525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 [2024-11-20 16:52:22.695284] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.741 [2024-11-20 16:52:22.695298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 [2024-11-20 16:52:22.707659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.741 [2024-11-20 16:52:22.707673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 19102.00 IOPS, 149.23 MiB/s [2024-11-20T15:52:22.917Z] [2024-11-20 16:52:22.721036] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.741 [2024-11-20 16:52:22.721051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 [2024-11-20 16:52:22.734263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.741 [2024-11-20 16:52:22.734281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 [2024-11-20 16:52:22.747722] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.741 [2024-11-20 16:52:22.747736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 [2024-11-20 16:52:22.761243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.741 [2024-11-20 16:52:22.761258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 [2024-11-20 16:52:22.775035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.741 [2024-11-20 16:52:22.775049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 [2024-11-20 16:52:22.788074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.741 [2024-11-20 16:52:22.788088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 [2024-11-20 16:52:22.800907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.741 [2024-11-20 16:52:22.800921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 [2024-11-20 16:52:22.813569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:30.741 [2024-11-20 16:52:22.813583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 [2024-11-20 16:52:22.827168] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.741 [2024-11-20 16:52:22.827182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 [2024-11-20 16:52:22.839790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.741 [2024-11-20 16:52:22.839804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 [2024-11-20 16:52:22.852951] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.741 [2024-11-20 16:52:22.852965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 [2024-11-20 16:52:22.865815] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.741 [2024-11-20 16:52:22.865830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 [2024-11-20 16:52:22.878642] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.741 [2024-11-20 16:52:22.878656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 [2024-11-20 16:52:22.892235] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.741 [2024-11-20 16:52:22.892249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.741 [2024-11-20 16:52:22.905774] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.741 [2024-11-20 16:52:22.905788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:22.918302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:22.918316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:22.930943] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:22.930957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:22.943783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:22.943797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:22.957264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:22.957278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:22.970809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:22.970823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:22.984059] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:22.984077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:22.997762] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:22.997777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:23.010686] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:23.010700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:23.023202] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:23.023216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:23.035967] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:23.035981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:23.049396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:23.049410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:23.062391] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:23.062405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:23.075102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:23.075116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:23.087609] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:23.087623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:23.100093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:23.100107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:23.113210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:23.113225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:23.126595] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:23.126609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:23.139406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:23.139420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:23.152719] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:23.152734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.003 [2024-11-20 16:52:23.166414] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.003 [2024-11-20 16:52:23.166428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.179728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.265 [2024-11-20 16:52:23.179742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.193155] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.265 [2024-11-20 16:52:23.193172] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.206767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.265 [2024-11-20 16:52:23.206781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.220165] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.265 [2024-11-20 16:52:23.220180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.233609] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.265 [2024-11-20 16:52:23.233623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.247260] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.265 [2024-11-20 16:52:23.247274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.259781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.265 [2024-11-20 16:52:23.259795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.272597] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.265 [2024-11-20 16:52:23.272611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.285931] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.265 [2024-11-20 16:52:23.285945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.298662] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.265 [2024-11-20 16:52:23.298676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.311584] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.265 [2024-11-20 16:52:23.311598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.325183] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.265 [2024-11-20 16:52:23.325198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.338333] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.265 [2024-11-20 16:52:23.338347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.351967] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.265 [2024-11-20 16:52:23.351982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.365810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.265 [2024-11-20 16:52:23.365825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.377866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.265 [2024-11-20 16:52:23.377881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.391105] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.265 [2024-11-20 16:52:23.391120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.403608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.265 [2024-11-20 16:52:23.403623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.415934] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.265 [2024-11-20 16:52:23.415948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.265 [2024-11-20 16:52:23.429470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.266 [2024-11-20 16:52:23.429485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.442917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.442932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.455899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.455914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.469573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.469588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.482109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.482123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.495903] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.495918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.508983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.508998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.521664] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.521680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.534434] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.534449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.547021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.547036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.559955] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.559970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.573218] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.573232] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.585624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.585638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.599599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.599614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.612382] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.612397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.626030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.626045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.639932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.639946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.652699] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.652714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.666256] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.666271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.679947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.679962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.527 [2024-11-20 16:52:23.692479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.527 [2024-11-20 16:52:23.692494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-20 16:52:23.705678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-20 16:52:23.705693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-20 16:52:23.719547] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-20 16:52:23.719562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 19124.33 IOPS, 149.41 MiB/s [2024-11-20T15:52:23.964Z] [2024-11-20 16:52:23.732320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-20 16:52:23.732335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-20 16:52:23.744962] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-20 16:52:23.744976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-20 16:52:23.758525] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-20 16:52:23.758540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-20 
16:52:23.771502] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-20 16:52:23.771516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-20 16:52:23.784762] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-20 16:52:23.784776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-20 16:52:23.797887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-20 16:52:23.797902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.789 [2024-11-20 16:52:23.811586] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.789 [2024-11-20 16:52:23.811601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.789 [2024-11-20 16:52:23.822680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.789 [2024-11-20 16:52:23.822694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.789 [2024-11-20 16:52:23.835957] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.789 [2024-11-20 16:52:23.835971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.789 [2024-11-20 16:52:23.849157] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.789 [2024-11-20 16:52:23.849177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.789 [2024-11-20 16:52:23.861604] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.789 [2024-11-20 16:52:23.861618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.789 [2024-11-20 16:52:23.875052] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.789 [2024-11-20 16:52:23.875067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.789 [2024-11-20 16:52:23.888399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.789 [2024-11-20 16:52:23.888413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.789 [2024-11-20 16:52:23.900995] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.789 [2024-11-20 16:52:23.901010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.789 [2024-11-20 16:52:23.913336] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.789 [2024-11-20 16:52:23.913351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.789 [2024-11-20 16:52:23.926696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.789 [2024-11-20 16:52:23.926711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.789 [2024-11-20 16:52:23.940275] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.789 [2024-11-20 16:52:23.940290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.789 [2024-11-20 16:52:23.953367] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.789 [2024-11-20 16:52:23.953382] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.049 [2024-11-20 16:52:23.965986] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.049 [2024-11-20 16:52:23.966005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.049 [2024-11-20 16:52:23.979288] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.049 [2024-11-20 16:52:23.979302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.049 [2024-11-20 16:52:23.992395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.049 [2024-11-20 16:52:23.992410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.049 [2024-11-20 16:52:24.005170] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.049 [2024-11-20 16:52:24.005185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.049 [2024-11-20 16:52:24.018072] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.049 [2024-11-20 16:52:24.018087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.049 [2024-11-20 16:52:24.031463] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.049 [2024-11-20 16:52:24.031477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.049 [2024-11-20 16:52:24.044305] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.049 [2024-11-20 16:52:24.044320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.049 [2024-11-20 16:52:24.057389] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.049 [2024-11-20 16:52:24.057403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.049 [2024-11-20 16:52:24.070778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.049 [2024-11-20 16:52:24.070793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.049 [2024-11-20 16:52:24.084142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.049 [2024-11-20 16:52:24.084156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.049 [2024-11-20 16:52:24.097732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.049 [2024-11-20 16:52:24.097747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.049 [2024-11-20 16:52:24.110491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.049 [2024-11-20 16:52:24.110505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.049 [2024-11-20 16:52:24.124062] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.050 [2024-11-20 16:52:24.124076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.050 [2024-11-20 16:52:24.137814] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.050 [2024-11-20 16:52:24.137828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.050 [2024-11-20 16:52:24.151407] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.050 [2024-11-20 16:52:24.151422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.050 [2024-11-20 16:52:24.163793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.050 [2024-11-20 16:52:24.163807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.050 [2024-11-20 16:52:24.176113] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.050 [2024-11-20 16:52:24.176127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.050 [2024-11-20 16:52:24.189292] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.050 [2024-11-20 16:52:24.189307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.050 [2024-11-20 16:52:24.202209] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.050 [2024-11-20 16:52:24.202223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.050 [2024-11-20 16:52:24.216032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.050 [2024-11-20 16:52:24.216050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.310 [2024-11-20 16:52:24.228805] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.310 [2024-11-20 16:52:24.228820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.310 [2024-11-20 16:52:24.242401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.310 [2024-11-20 16:52:24.242415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.310 [2024-11-20 16:52:24.255282] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.310 [2024-11-20 16:52:24.255296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.310 [2024-11-20 16:52:24.268917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.310 [2024-11-20 16:52:24.268931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.310 [2024-11-20 16:52:24.282549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.310 [2024-11-20 16:52:24.282564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.310 [2024-11-20 16:52:24.295343] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.310 [2024-11-20 16:52:24.295357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.310 [2024-11-20 16:52:24.308794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.310 [2024-11-20 16:52:24.308807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.310 [2024-11-20 16:52:24.322198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.310 [2024-11-20 16:52:24.322212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.311 [2024-11-20 16:52:24.335892] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.311 [2024-11-20 16:52:24.335906] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.311 [2024-11-20 16:52:24.348885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.311 [2024-11-20 16:52:24.348899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.311 [2024-11-20 16:52:24.361428] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.311 [2024-11-20 16:52:24.361442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.311 [2024-11-20 16:52:24.374937] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.311 [2024-11-20 16:52:24.374952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.311 [2024-11-20 16:52:24.387668] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.311 [2024-11-20 16:52:24.387682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.311 [2024-11-20 16:52:24.400321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.311 [2024-11-20 16:52:24.400335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.311 [2024-11-20 16:52:24.412877] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.311 [2024-11-20 16:52:24.412891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.311 [2024-11-20 16:52:24.426067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.311 [2024-11-20 16:52:24.426081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.311 [2024-11-20 16:52:24.438971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.311 [2024-11-20 16:52:24.438985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.311 [2024-11-20 16:52:24.452496] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.311 [2024-11-20 16:52:24.452511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.311 [2024-11-20 16:52:24.465553] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.311 [2024-11-20 16:52:24.465572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.311 [2024-11-20 16:52:24.479023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.311 [2024-11-20 16:52:24.479038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.572 [2024-11-20 16:52:24.491602] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.572 [2024-11-20 16:52:24.491616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.572 [2024-11-20 16:52:24.504040] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.572 [2024-11-20 16:52:24.504054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.572 [2024-11-20 16:52:24.516599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.572 [2024-11-20 16:52:24.516613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.572 [2024-11-20 16:52:24.530531] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.572 [2024-11-20 16:52:24.530545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.572 [2024-11-20 16:52:24.543385] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.572 [2024-11-20 16:52:24.543399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.572 [2024-11-20 16:52:24.556712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.572 [2024-11-20 16:52:24.556726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.572 [2024-11-20 16:52:24.569501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.572 [2024-11-20 16:52:24.569516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.572 [2024-11-20 16:52:24.582075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.572 [2024-11-20 16:52:24.582089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.572 [2024-11-20 16:52:24.595306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.572 [2024-11-20 16:52:24.595320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.572 [2024-11-20 16:52:24.608282] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.572 [2024-11-20 16:52:24.608296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.572 [2024-11-20 16:52:24.621404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.572 [2024-11-20 16:52:24.621418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.572 [2024-11-20 16:52:24.634529] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.572 [2024-11-20 16:52:24.634543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.572 [2024-11-20 16:52:24.647647] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.572 [2024-11-20 16:52:24.647661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.572 [2024-11-20 16:52:24.660810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.572 [2024-11-20 16:52:24.660824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.572 [2024-11-20 16:52:24.674432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.572 [2024-11-20 16:52:24.674446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.572 [2024-11-20 16:52:24.687726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.572 [2024-11-20 16:52:24.687740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.572 [2024-11-20 16:52:24.700457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.572 [2024-11-20 16:52:24.700471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.572 [2024-11-20 16:52:24.713340] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.572 [2024-11-20 16:52:24.713359] 
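The run above is the zcopy test deliberately exercising the duplicate-NSID error path: NSID 1 is still attached to nqn.2016-06.io.spdk:cnode1, so every add attempt fails, and the pairs recur at the RPC round-trip cadence. A minimal sketch of such a retry loop, assuming the harness's rpc_cmd wrapper and the malloc0 bdev seen later in this log; the loop body and the $perf_pid variable are illustrative, not lifted from zcopy.sh:

  # Keep re-adding an NSID that is already in use while the background
  # workload (PID in $perf_pid) is still running; every call is expected
  # to fail with "Requested NSID 1 already in use".
  while kill -0 "$perf_pid" 2>/dev/null; do
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
      sleep 0.01   # matches the ~13 ms spacing of the error pairs above
  done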
00:09:32.572 19149.75 IOPS, 149.61 MiB/s [2024-11-20T15:52:24.748Z]
[... the error pair continues at ~13 ms intervals from 16:52:24.726 through 16:52:25.722; run condensed ...]
00:09:33.616 19154.00 IOPS, 149.64 MiB/s [2024-11-20T15:52:25.792Z]
00:09:33.616 [2024-11-20 16:52:25.733773] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:33.616 [2024-11-20 16:52:25.733787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:33.616
00:09:33.616 Latency(us)
00:09:33.616 Device Information                                                             : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:09:33.616 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:33.616 Nvme1n1                                                                        :       5.01   19156.57     149.66      0.00     0.00    6675.61    3099.31   18896.21
00:09:33.616 ===================================================================================================================
00:09:33.616 Total                                                                          :            19156.57     149.66      0.00     0.00    6675.61    3099.31   18896.21
00:09:33.616 [2024-11-20 16:52:25.744343] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:33.616 [2024-11-20 16:52:25.744354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair repeats at 16:52:25.756, .768, .780, .792 and .804 while the zcopy job shuts down; condensed ...]
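The MiB/s column in the job summary above follows directly from the IOPS column and the 8192-byte I/O size; a quick sanity check with any POSIX awk:

  # 19156.57 IOPS * 8192 B per I/O / 2^20 B per MiB = 149.66 MiB/s,
  # matching the Nvme1n1 and Total rows printed by the summary table.
  awk 'BEGIN { printf "%.2f MiB/s\n", 19156.57 * 8192 / (1024 * 1024) }'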
16:52:25.816523] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.876 [2024-11-20 16:52:25.816534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.876 [2024-11-20 16:52:25.828554] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.876 [2024-11-20 16:52:25.828564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1808607) - No such process 00:09:33.876 16:52:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1808607 00:09:33.876 16:52:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.876 16:52:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.876 16:52:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.876 16:52:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.876 16:52:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:33.876 16:52:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.876 16:52:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.876 delay0 00:09:33.876 16:52:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.876 16:52:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:33.876 16:52:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.876 16:52:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.876 16:52:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.876 16:52:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:33.876 [2024-11-20 16:52:25.995304] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:42.024 Initializing NVMe Controllers 00:09:42.024 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:42.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:42.024 Initialization complete. Launching workers. 
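[Editor's note] The zcopy suite has just swapped its namespace onto a deliberately slow bdev so that queued I/O lives long enough to be aborted. A condensed sketch of the RPC sequence traced above — rpc_cmd is assumed to forward to scripts/rpc.py against the running nvmf_tgt, as in SPDK's test common.sh; the abort run's per-namespace results follow below.

```bash
# Condensed from the zcopy.sh steps traced above.
rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
# Wrap malloc0 in a delay bdev: avg/p99 read and write latency all 1s (values in us).
rpc_cmd bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
# Re-expose the slow bdev as NSID 1...
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# ...then fire aborts at it for 5 seconds over TCP.
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
```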
00:09:42.024 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 252, failed: 32703 00:09:42.024 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 32821, failed to submit 134 00:09:42.024 success 32722, unsuccessful 99, failed 0 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.024 rmmod nvme_tcp 00:09:42.024 rmmod nvme_fabrics 00:09:42.024 rmmod nvme_keyring 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1806390 ']' 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1806390 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1806390 ']' 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1806390 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1806390 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1806390' 00:09:42.024 killing process with pid 1806390 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1806390 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1806390 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:42.024 16:52:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.024 16:52:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:43.537 00:09:43.537 real 0m34.544s 00:09:43.537 user 0m45.467s 00:09:43.537 sys 0m11.934s 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.537 ************************************ 00:09:43.537 END TEST nvmf_zcopy 00:09:43.537 ************************************ 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.537 ************************************ 00:09:43.537 START TEST nvmf_nmic 00:09:43.537 ************************************ 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:43.537 * Looking for test storage... 
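[Editor's note] Between suites, nvmftestfini tears the fabric back down; the sequence just traced above boils down to the sketch below. The $nvmfpid variable stands in for the target pid shown in the log, and the netns deletion is an assumption — _remove_spdk_ns runs behind xtrace_disable here, so its body is not visible. The nmic suite's storage probe continues below.

```bash
# nvmftestfini, condensed from the trace above.
sync
modprobe -v -r nvme-tcp        # also drags out nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"     # killprocess of the nvmf_tgt pid
# iptr: strip only the SPDK-tagged firewall rules, keep everything else.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Assumed: _remove_spdk_ns deletes the target netns (hidden by xtrace here).
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1
```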
00:09:43.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:43.537 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.538 --rc genhtml_branch_coverage=1 00:09:43.538 --rc genhtml_function_coverage=1 00:09:43.538 --rc genhtml_legend=1 00:09:43.538 --rc geninfo_all_blocks=1 00:09:43.538 --rc geninfo_unexecuted_blocks=1 00:09:43.538 00:09:43.538 ' 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.538 --rc genhtml_branch_coverage=1 00:09:43.538 --rc genhtml_function_coverage=1 00:09:43.538 --rc genhtml_legend=1 00:09:43.538 --rc geninfo_all_blocks=1 00:09:43.538 --rc geninfo_unexecuted_blocks=1 00:09:43.538 00:09:43.538 ' 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.538 --rc genhtml_branch_coverage=1 00:09:43.538 --rc genhtml_function_coverage=1 00:09:43.538 --rc genhtml_legend=1 00:09:43.538 --rc geninfo_all_blocks=1 00:09:43.538 --rc geninfo_unexecuted_blocks=1 00:09:43.538 00:09:43.538 ' 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.538 --rc genhtml_branch_coverage=1 00:09:43.538 --rc genhtml_function_coverage=1 00:09:43.538 --rc genhtml_legend=1 00:09:43.538 --rc geninfo_all_blocks=1 00:09:43.538 --rc geninfo_unexecuted_blocks=1 00:09:43.538 00:09:43.538 ' 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
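[Editor's note] The entries just above trace scripts/common.sh deciding whether the installed lcov predates 2.0, since old lcov needs the --rc compatibility flags. A minimal restatement of that comparison, covering only the '<' operator exercised here — the real cmp_versions takes any operator and validates each field as a decimal.

```bash
# Split versions on '.', '-' and ':' and compare field by field.
version_lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1   # equal is not strictly less-than
}
version_lt "$(lcov --version | awk '{print $NF}')" 2 &&
  lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
```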
00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.538 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.842 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:43.843 
16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:43.843 16:52:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:51.987 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:51.987 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.987 16:52:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:51.987 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.987 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:51.988 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.988 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:51.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:09:51.988 00:09:51.988 --- 10.0.0.2 ping statistics --- 00:09:51.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.988 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:51.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:09:51.988 00:09:51.988 --- 10.0.0.1 ping statistics --- 00:09:51.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.988 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1815465 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1815465 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1815465 ']' 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.988 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.988 [2024-11-20 16:52:43.327359] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
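[Editor's note] nvmf_tcp_init splits the two E810 ports across network namespaces so one machine can play both initiator and target; condensed from the entries above, using the cvl_0_0/cvl_0_1 device names the log reports. The target application's EAL startup banner continues below.

```bash
# cvl_0_0 becomes the target port inside a private netns,
# cvl_0_1 stays in the default netns as the initiator.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP through, tagged so nvmftestfini can strip the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
```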
00:09:51.988 [2024-11-20 16:52:43.327426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.988 [2024-11-20 16:52:43.428810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.988 [2024-11-20 16:52:43.483146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.988 [2024-11-20 16:52:43.483217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.988 [2024-11-20 16:52:43.483226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.988 [2024-11-20 16:52:43.483233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.988 [2024-11-20 16:52:43.483239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.988 [2024-11-20 16:52:43.485203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.988 [2024-11-20 16:52:43.485306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.988 [2024-11-20 16:52:43.485447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.988 [2024-11-20 16:52:43.485448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.988 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.988 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:51.988 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:51.988 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:51.988 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.248 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.248 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:52.248 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.249 [2024-11-20 16:52:44.200021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.249 Malloc0 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.249 [2024-11-20 16:52:44.272948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:52.249 test case1: single bdev can't be used in multiple subsystems 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.249 [2024-11-20 16:52:44.308766] bdev.c:8473:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:52.249 [2024-11-20 16:52:44.308791] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:52.249 [2024-11-20 16:52:44.308800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.249 request: 00:09:52.249 { 00:09:52.249 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:52.249 "namespace": { 00:09:52.249 "bdev_name": "Malloc0", 00:09:52.249 "no_auto_visible": false, 
00:09:52.249 "hide_metadata": false 00:09:52.249 }, 00:09:52.249 "method": "nvmf_subsystem_add_ns", 00:09:52.249 "req_id": 1 00:09:52.249 } 00:09:52.249 Got JSON-RPC error response 00:09:52.249 response: 00:09:52.249 { 00:09:52.249 "code": -32602, 00:09:52.249 "message": "Invalid parameters" 00:09:52.249 } 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:52.249 Adding namespace failed - expected result. 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:52.249 test case2: host connect to nvmf target in multiple paths 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.249 [2024-11-20 16:52:44.320992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.249 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:54.160 16:52:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:55.542 16:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:55.542 16:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:55.542 16:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:55.542 16:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:55.542 16:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:57.454 16:52:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:57.454 16:52:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:57.454 16:52:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:57.454 16:52:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:57.454 16:52:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:57.454 16:52:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:57.455 16:52:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:57.455 [global] 00:09:57.455 thread=1 00:09:57.455 invalidate=1 00:09:57.455 rw=write 00:09:57.455 time_based=1 00:09:57.455 runtime=1 00:09:57.455 ioengine=libaio 00:09:57.455 direct=1 00:09:57.455 bs=4096 00:09:57.455 iodepth=1 00:09:57.455 norandommap=0 00:09:57.455 numjobs=1 00:09:57.455 00:09:57.455 verify_dump=1 00:09:57.455 verify_backlog=512 00:09:57.455 verify_state_save=0 00:09:57.455 do_verify=1 00:09:57.455 verify=crc32c-intel 00:09:57.455 [job0] 00:09:57.455 filename=/dev/nvme0n1 00:09:57.455 Could not set queue depth (nvme0n1) 00:09:57.715 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.716 fio-3.35 00:09:57.716 Starting 1 thread 00:09:59.100 00:09:59.100 job0: (groupid=0, jobs=1): err= 0: pid=1816827: Wed Nov 20 16:52:51 2024 00:09:59.100 read: IOPS=331, BW=1327KiB/s (1359kB/s)(1364KiB/1028msec) 00:09:59.100 slat (nsec): min=6671, max=34971, avg=24343.36, stdev=7736.21 00:09:59.100 clat (usec): min=713, max=41057, avg=2353.37, stdev=7385.97 00:09:59.100 lat (usec): min=744, max=41084, avg=2377.71, stdev=7386.51 00:09:59.100 clat percentiles (usec): 00:09:59.100 | 1.00th=[ 758], 5.00th=[ 807], 10.00th=[ 832], 20.00th=[ 881], 00:09:59.100 | 30.00th=[ 914], 40.00th=[ 938], 50.00th=[ 963], 60.00th=[ 971], 00:09:59.100 | 70.00th=[ 996], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[ 1090], 00:09:59.100 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:59.100 | 99.99th=[41157] 00:09:59.100 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:09:59.100 slat (usec): min=9, max=24941, avg=65.64, stdev=1101.58 00:09:59.100 clat (usec): min=115, max=3351, avg=348.08, stdev=221.51 00:09:59.100 lat (usec): min=126, max=25413, avg=413.72, stdev=1129.38 00:09:59.100 clat percentiles (usec): 00:09:59.100 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 155], 00:09:59.100 | 30.00th=[ 247], 40.00th=[ 302], 50.00th=[ 343], 60.00th=[ 363], 00:09:59.100 | 70.00th=[ 420], 80.00th=[ 478], 90.00th=[ 537], 95.00th=[ 578], 00:09:59.100 | 99.00th=[ 758], 99.50th=[ 1647], 99.90th=[ 3359], 99.95th=[ 3359], 00:09:59.100 | 99.99th=[ 3359] 00:09:59.100 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.100 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.100 lat (usec) : 250=18.05%, 500=32.94%, 750=8.68%, 1000=29.31% 00:09:59.100 lat (msec) : 2=9.50%, 4=0.12%, 50=1.41% 00:09:59.100 cpu : usr=1.36%, sys=1.85%, ctx=856, majf=0, minf=1 00:09:59.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.100 issued rwts: total=341,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.100 00:09:59.100 Run status group 0 (all jobs): 00:09:59.100 READ: bw=1327KiB/s (1359kB/s), 1327KiB/s-1327KiB/s (1359kB/s-1359kB/s), io=1364KiB (1397kB), run=1028-1028msec 00:09:59.100 WRITE: bw=1992KiB/s (2040kB/s), 1992KiB/s-1992KiB/s (2040kB/s-2040kB/s), io=2048KiB (2097kB), run=1028-1028msec 00:09:59.100 00:09:59.100 Disk stats (read/write): 00:09:59.100 nvme0n1: ios=389/512, merge=0/0, ticks=1051/175, 
in_queue=1226, util=98.70% 00:09:59.100 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:59.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:59.100 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:59.100 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:59.100 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:59.100 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.100 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:59.100 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.100 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:59.100 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:59.100 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:59.100 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:59.100 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:59.100 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:59.100 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:59.100 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:59.100 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:59.100 rmmod nvme_tcp 00:09:59.100 rmmod nvme_fabrics 00:09:59.360 rmmod nvme_keyring 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1815465 ']' 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1815465 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1815465 ']' 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1815465 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1815465 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1815465' 00:09:59.360 killing process with pid 1815465 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@973 -- # kill 1815465 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1815465 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.360 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.361 16:52:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:01.905 00:10:01.905 real 0m18.118s 00:10:01.905 user 0m50.995s 00:10:01.905 sys 0m6.620s 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:01.905 ************************************ 00:10:01.905 END TEST nvmf_nmic 00:10:01.905 ************************************ 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:01.905 ************************************ 00:10:01.905 START TEST nvmf_fio_target 00:10:01.905 ************************************ 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:01.905 * Looking for test storage... 
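[Editor's note] nvmf_nmic has just closed out and nvmf_fio_target opens under the same wrapper. The harness pattern, in miniature — banner text and the timing lines match the log, but the real run_test in autotest_common.sh also manages xtrace state and exit-status bookkeeping, so treat this as a simplified sketch. The storage probe for fio_target resumes below.

```bash
# The run_test harness visible around every suite in this log, simplified.
run_test() {
  local test_name=$1; shift
  echo "************************************"
  echo "START TEST $test_name"
  echo "************************************"
  time "$@"     # produces the real/user/sys lines seen before END TEST
  echo "************************************"
  echo "END TEST $test_name"
  echo "************************************"
}
run_test nvmf_fio_target test/nvmf/target/fio.sh --transport=tcp
```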
00:10:01.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:01.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.905 --rc genhtml_branch_coverage=1 00:10:01.905 --rc genhtml_function_coverage=1 00:10:01.905 --rc genhtml_legend=1 00:10:01.905 --rc geninfo_all_blocks=1 00:10:01.905 --rc geninfo_unexecuted_blocks=1 00:10:01.905 00:10:01.905 ' 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:01.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.905 --rc genhtml_branch_coverage=1 00:10:01.905 --rc genhtml_function_coverage=1 00:10:01.905 --rc genhtml_legend=1 00:10:01.905 --rc geninfo_all_blocks=1 00:10:01.905 --rc geninfo_unexecuted_blocks=1 00:10:01.905 00:10:01.905 ' 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:01.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.905 --rc genhtml_branch_coverage=1 00:10:01.905 --rc genhtml_function_coverage=1 00:10:01.905 --rc genhtml_legend=1 00:10:01.905 --rc geninfo_all_blocks=1 00:10:01.905 --rc geninfo_unexecuted_blocks=1 00:10:01.905 00:10:01.905 ' 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:01.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.905 --rc genhtml_branch_coverage=1 00:10:01.905 --rc genhtml_function_coverage=1 00:10:01.905 --rc genhtml_legend=1 00:10:01.905 --rc geninfo_all_blocks=1 00:10:01.905 --rc geninfo_unexecuted_blocks=1 00:10:01.905 00:10:01.905 ' 00:10:01.905 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:01.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:01.906 16:52:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:01.906 16:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.048 16:53:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:10.048 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.048 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:10.049 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.049 16:53:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:10.049 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:10.049 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:10.049 16:53:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:10.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:10:10.049 00:10:10.049 --- 10.0.0.2 ping statistics --- 00:10:10.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.049 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:10.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:10.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:10:10.049 00:10:10.049 --- 10.0.0.1 ping statistics --- 00:10:10.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.049 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1821382 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1821382 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1821382 ']' 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.049 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.049 [2024-11-20 16:53:01.501882] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:10:10.049 [2024-11-20 16:53:01.501950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.049 [2024-11-20 16:53:01.601618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:10.049 [2024-11-20 16:53:01.654904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.049 [2024-11-20 16:53:01.654956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.049 [2024-11-20 16:53:01.654964] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.049 [2024-11-20 16:53:01.654971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.049 [2024-11-20 16:53:01.654978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.049 [2024-11-20 16:53:01.657004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.049 [2024-11-20 16:53:01.657191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.049 [2024-11-20 16:53:01.657295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.049 [2024-11-20 16:53:01.657295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.310 16:53:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.310 16:53:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:10.310 16:53:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:10.310 16:53:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:10.310 16:53:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.310 16:53:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.310 16:53:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:10.571 [2024-11-20 16:53:02.543684] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:10.571 16:53:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:10.832 16:53:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:10.832 16:53:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:11.093 16:53:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:11.093 16:53:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:11.093 16:53:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:11.093 16:53:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:11.354 16:53:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:11.354 16:53:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:11.614 16:53:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:11.874 16:53:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:11.874 16:53:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:11.874 16:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:11.874 16:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:12.135 16:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:12.135 16:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:12.395 16:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:12.656 16:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:12.656 16:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:12.656 16:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:12.656 16:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:12.916 16:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.177 [2024-11-20 16:53:05.116249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.177 16:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:13.177 16:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:13.438 16:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:15.351 16:53:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:15.351 16:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:15.351 16:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:15.351 16:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:15.351 16:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:15.351 16:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:17.263 16:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:17.263 16:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:17.263 16:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:17.263 16:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:17.263 16:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:17.263 16:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:17.263 16:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:17.263 [global] 00:10:17.263 thread=1 00:10:17.263 invalidate=1 00:10:17.263 rw=write 00:10:17.263 time_based=1 00:10:17.263 runtime=1 00:10:17.263 ioengine=libaio 00:10:17.263 direct=1 00:10:17.263 bs=4096 00:10:17.263 iodepth=1 00:10:17.263 norandommap=0 00:10:17.263 numjobs=1 00:10:17.263 00:10:17.263 verify_dump=1 00:10:17.263 verify_backlog=512 00:10:17.263 verify_state_save=0 00:10:17.263 do_verify=1 00:10:17.263 verify=crc32c-intel 00:10:17.263 [job0] 00:10:17.263 filename=/dev/nvme0n1 00:10:17.263 [job1] 00:10:17.263 filename=/dev/nvme0n2 00:10:17.263 [job2] 00:10:17.263 filename=/dev/nvme0n3 00:10:17.263 [job3] 00:10:17.263 filename=/dev/nvme0n4 00:10:17.263 Could not set queue depth (nvme0n1) 00:10:17.263 Could not set queue depth (nvme0n2) 00:10:17.263 Could not set queue depth (nvme0n3) 00:10:17.263 Could not set queue depth (nvme0n4) 00:10:17.523 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.523 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.523 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.523 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.523 fio-3.35 00:10:17.523 Starting 4 threads 00:10:18.906 00:10:18.906 job0: (groupid=0, jobs=1): err= 0: pid=1823301: Wed Nov 20 16:53:10 2024 00:10:18.906 read: IOPS=61, BW=248KiB/s (253kB/s)(252KiB/1018msec) 00:10:18.906 slat (nsec): min=6953, max=45406, avg=23478.17, stdev=7734.93 00:10:18.906 clat (usec): min=641, max=41207, avg=10979.08, stdev=17646.53 00:10:18.906 lat (usec): min=648, max=41234, avg=11002.56, stdev=17648.35 00:10:18.906 clat percentiles (usec): 00:10:18.906 | 1.00th=[ 644], 5.00th=[ 652], 10.00th=[ 676], 20.00th=[ 734], 
00:10:18.906 | 30.00th=[ 750], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 816], 00:10:18.906 | 70.00th=[ 889], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:18.906 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:18.906 | 99.99th=[41157] 00:10:18.906 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:10:18.906 slat (nsec): min=9447, max=61765, avg=28470.87, stdev=10313.64 00:10:18.906 clat (usec): min=215, max=4280, avg=599.24, stdev=201.86 00:10:18.906 lat (usec): min=226, max=4313, avg=627.71, stdev=204.78 00:10:18.906 clat percentiles (usec): 00:10:18.906 | 1.00th=[ 302], 5.00th=[ 379], 10.00th=[ 441], 20.00th=[ 490], 00:10:18.906 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 627], 00:10:18.906 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 750], 95.00th=[ 783], 00:10:18.906 | 99.00th=[ 840], 99.50th=[ 848], 99.90th=[ 4293], 99.95th=[ 4293], 00:10:18.906 | 99.99th=[ 4293] 00:10:18.906 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:18.906 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:18.906 lat (usec) : 250=0.17%, 500=19.65%, 750=63.48%, 1000=13.74% 00:10:18.906 lat (msec) : 10=0.17%, 50=2.78% 00:10:18.906 cpu : usr=0.49%, sys=1.77%, ctx=575, majf=0, minf=1 00:10:18.906 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.906 issued rwts: total=63,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.906 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.906 job1: (groupid=0, jobs=1): err= 0: pid=1823302: Wed Nov 20 16:53:10 2024 00:10:18.906 read: IOPS=18, BW=73.4KiB/s (75.1kB/s)(76.0KiB/1036msec) 00:10:18.906 slat (nsec): min=25381, max=29812, avg=25968.63, stdev=990.44 00:10:18.906 clat (usec): min=827, max=42032, avg=37242.45, stdev=12805.34 00:10:18.906 lat (usec): min=857, max=42057, avg=37268.42, stdev=12804.72 00:10:18.906 clat percentiles (usec): 00:10:18.906 | 1.00th=[ 832], 5.00th=[ 832], 10.00th=[ 1029], 20.00th=[41157], 00:10:18.906 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:10:18.906 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:18.906 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:18.906 | 99.99th=[42206] 00:10:18.906 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:10:18.906 slat (nsec): min=9697, max=53686, avg=28238.26, stdev=9680.64 00:10:18.906 clat (usec): min=330, max=1201, avg=606.15, stdev=116.41 00:10:18.906 lat (usec): min=346, max=1234, avg=634.39, stdev=120.76 00:10:18.906 clat percentiles (usec): 00:10:18.906 | 1.00th=[ 347], 5.00th=[ 388], 10.00th=[ 453], 20.00th=[ 494], 00:10:18.906 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 652], 00:10:18.906 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 766], 00:10:18.906 | 99.00th=[ 840], 99.50th=[ 873], 99.90th=[ 1205], 99.95th=[ 1205], 00:10:18.907 | 99.99th=[ 1205] 00:10:18.907 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:18.907 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:18.907 lat (usec) : 500=20.90%, 750=68.17%, 1000=7.34% 00:10:18.907 lat (msec) : 2=0.38%, 50=3.20% 00:10:18.907 cpu : usr=0.58%, sys=1.55%, ctx=531, majf=0, minf=1 00:10:18.907 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.907 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.907 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.907 job2: (groupid=0, jobs=1): err= 0: pid=1823303: Wed Nov 20 16:53:10 2024 00:10:18.907 read: IOPS=17, BW=69.4KiB/s (71.1kB/s)(72.0KiB/1037msec) 00:10:18.907 slat (nsec): min=25336, max=26592, avg=25880.33, stdev=328.60 00:10:18.907 clat (usec): min=1127, max=42012, avg=39388.45, stdev=9558.62 00:10:18.907 lat (usec): min=1152, max=42037, avg=39414.33, stdev=9558.71 00:10:18.907 clat percentiles (usec): 00:10:18.907 | 1.00th=[ 1123], 5.00th=[ 1123], 10.00th=[41157], 20.00th=[41157], 00:10:18.907 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:18.907 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:18.907 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:18.907 | 99.99th=[42206] 00:10:18.907 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:10:18.907 slat (nsec): min=9632, max=53244, avg=29645.75, stdev=9570.86 00:10:18.907 clat (usec): min=162, max=1158, avg=603.69, stdev=117.68 00:10:18.907 lat (usec): min=173, max=1192, avg=633.34, stdev=121.90 00:10:18.907 clat percentiles (usec): 00:10:18.907 | 1.00th=[ 330], 5.00th=[ 392], 10.00th=[ 445], 20.00th=[ 494], 00:10:18.907 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:10:18.907 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 734], 95.00th=[ 766], 00:10:18.907 | 99.00th=[ 832], 99.50th=[ 873], 99.90th=[ 1156], 99.95th=[ 1156], 00:10:18.907 | 99.99th=[ 1156] 00:10:18.907 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:18.907 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:18.907 lat (usec) : 250=0.19%, 500=20.38%, 750=69.43%, 1000=6.42% 00:10:18.907 lat (msec) : 2=0.38%, 50=3.21% 00:10:18.907 cpu : usr=0.68%, sys=1.45%, ctx=530, majf=0, minf=1 00:10:18.907 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.907 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.907 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.907 job3: (groupid=0, jobs=1): err= 0: pid=1823304: Wed Nov 20 16:53:10 2024 00:10:18.907 read: IOPS=18, BW=73.1KiB/s (74.8kB/s)(76.0KiB/1040msec) 00:10:18.907 slat (nsec): min=26625, max=30251, avg=27338.89, stdev=920.83 00:10:18.907 clat (usec): min=946, max=42044, avg=37519.06, stdev=12884.77 00:10:18.907 lat (usec): min=977, max=42071, avg=37546.40, stdev=12884.28 00:10:18.907 clat percentiles (usec): 00:10:18.907 | 1.00th=[ 947], 5.00th=[ 947], 10.00th=[ 979], 20.00th=[41157], 00:10:18.907 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:18.907 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:18.907 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:18.907 | 99.99th=[42206] 00:10:18.907 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:10:18.907 slat (nsec): min=10209, max=56192, avg=32512.11, stdev=8844.33 00:10:18.907 
clat (usec): min=223, max=932, avg=598.56, stdev=118.16 00:10:18.907 lat (usec): min=235, max=967, avg=631.08, stdev=120.22 00:10:18.907 clat percentiles (usec): 00:10:18.907 | 1.00th=[ 326], 5.00th=[ 408], 10.00th=[ 453], 20.00th=[ 502], 00:10:18.907 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 619], 00:10:18.907 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 750], 95.00th=[ 799], 00:10:18.907 | 99.00th=[ 889], 99.50th=[ 914], 99.90th=[ 930], 99.95th=[ 930], 00:10:18.907 | 99.99th=[ 930] 00:10:18.907 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:18.907 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:18.907 lat (usec) : 250=0.19%, 500=18.83%, 750=67.42%, 1000=10.36% 00:10:18.907 lat (msec) : 50=3.20% 00:10:18.907 cpu : usr=0.58%, sys=1.73%, ctx=533, majf=0, minf=1 00:10:18.907 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.907 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.907 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.907 00:10:18.907 Run status group 0 (all jobs): 00:10:18.907 READ: bw=458KiB/s (469kB/s), 69.4KiB/s-248KiB/s (71.1kB/s-253kB/s), io=476KiB (487kB), run=1018-1040msec 00:10:18.907 WRITE: bw=7877KiB/s (8066kB/s), 1969KiB/s-2012KiB/s (2016kB/s-2060kB/s), io=8192KiB (8389kB), run=1018-1040msec 00:10:18.907 00:10:18.907 Disk stats (read/write): 00:10:18.907 nvme0n1: ios=106/512, merge=0/0, ticks=554/290, in_queue=844, util=89.48% 00:10:18.907 nvme0n2: ios=18/512, merge=0/0, ticks=666/301, in_queue=967, util=84.38% 00:10:18.907 nvme0n3: ios=17/512, merge=0/0, ticks=668/298, in_queue=966, util=89.33% 00:10:18.907 nvme0n4: ios=75/512, merge=0/0, ticks=1398/289, in_queue=1687, util=99.02% 00:10:18.907 16:53:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:18.907 [global] 00:10:18.907 thread=1 00:10:18.907 invalidate=1 00:10:18.907 rw=randwrite 00:10:18.907 time_based=1 00:10:18.907 runtime=1 00:10:18.907 ioengine=libaio 00:10:18.907 direct=1 00:10:18.907 bs=4096 00:10:18.907 iodepth=1 00:10:18.907 norandommap=0 00:10:18.907 numjobs=1 00:10:18.907 00:10:18.907 verify_dump=1 00:10:18.907 verify_backlog=512 00:10:18.907 verify_state_save=0 00:10:18.907 do_verify=1 00:10:18.907 verify=crc32c-intel 00:10:18.907 [job0] 00:10:18.907 filename=/dev/nvme0n1 00:10:18.907 [job1] 00:10:18.907 filename=/dev/nvme0n2 00:10:18.907 [job2] 00:10:18.907 filename=/dev/nvme0n3 00:10:18.907 [job3] 00:10:18.907 filename=/dev/nvme0n4 00:10:18.907 Could not set queue depth (nvme0n1) 00:10:18.907 Could not set queue depth (nvme0n2) 00:10:18.907 Could not set queue depth (nvme0n3) 00:10:18.907 Could not set queue depth (nvme0n4) 00:10:19.168 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.168 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.168 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.168 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.168 fio-3.35 00:10:19.168 Starting 4 
threads 00:10:20.549 00:10:20.549 job0: (groupid=0, jobs=1): err= 0: pid=1823828: Wed Nov 20 16:53:12 2024 00:10:20.549 read: IOPS=18, BW=73.2KiB/s (75.0kB/s)(76.0KiB/1038msec) 00:10:20.549 slat (nsec): min=26233, max=26760, avg=26404.05, stdev=152.64 00:10:20.549 clat (usec): min=40883, max=41051, avg=40961.93, stdev=38.49 00:10:20.549 lat (usec): min=40910, max=41078, avg=40988.33, stdev=38.46 00:10:20.549 clat percentiles (usec): 00:10:20.549 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:20.549 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:20.549 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:20.549 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:20.549 | 99.99th=[41157] 00:10:20.549 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:10:20.549 slat (nsec): min=9278, max=70285, avg=27433.63, stdev=9772.47 00:10:20.549 clat (usec): min=114, max=1000, avg=471.91, stdev=122.01 00:10:20.549 lat (usec): min=124, max=1030, avg=499.35, stdev=125.08 00:10:20.549 clat percentiles (usec): 00:10:20.549 | 1.00th=[ 233], 5.00th=[ 281], 10.00th=[ 318], 20.00th=[ 367], 00:10:20.549 | 30.00th=[ 400], 40.00th=[ 449], 50.00th=[ 465], 60.00th=[ 494], 00:10:20.549 | 70.00th=[ 523], 80.00th=[ 570], 90.00th=[ 635], 95.00th=[ 676], 00:10:20.549 | 99.00th=[ 783], 99.50th=[ 807], 99.90th=[ 1004], 99.95th=[ 1004], 00:10:20.549 | 99.99th=[ 1004] 00:10:20.549 bw ( KiB/s): min= 4096, max= 4096, per=45.52%, avg=4096.00, stdev= 0.00, samples=1 00:10:20.549 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:20.549 lat (usec) : 250=1.69%, 500=59.51%, 750=33.33%, 1000=1.69% 00:10:20.549 lat (msec) : 2=0.19%, 50=3.58% 00:10:20.549 cpu : usr=0.58%, sys=1.54%, ctx=532, majf=0, minf=1 00:10:20.549 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.549 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.549 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.549 job1: (groupid=0, jobs=1): err= 0: pid=1823829: Wed Nov 20 16:53:12 2024 00:10:20.549 read: IOPS=17, BW=69.5KiB/s (71.2kB/s)(72.0KiB/1036msec) 00:10:20.549 slat (nsec): min=23926, max=25820, avg=25019.72, stdev=373.68 00:10:20.549 clat (usec): min=1166, max=42186, avg=39560.92, stdev=9588.41 00:10:20.549 lat (usec): min=1191, max=42211, avg=39585.94, stdev=9588.40 00:10:20.549 clat percentiles (usec): 00:10:20.549 | 1.00th=[ 1172], 5.00th=[ 1172], 10.00th=[41157], 20.00th=[41157], 00:10:20.549 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:20.549 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:20.549 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:20.549 | 99.99th=[42206] 00:10:20.549 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:10:20.549 slat (nsec): min=9410, max=49626, avg=28527.49, stdev=7919.43 00:10:20.549 clat (usec): min=237, max=988, avg=594.36, stdev=133.18 00:10:20.549 lat (usec): min=252, max=1019, avg=622.88, stdev=135.85 00:10:20.549 clat percentiles (usec): 00:10:20.549 | 1.00th=[ 262], 5.00th=[ 351], 10.00th=[ 408], 20.00th=[ 482], 00:10:20.549 | 30.00th=[ 529], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 644], 00:10:20.549 | 70.00th=[ 676], 
80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 791], 00:10:20.549 | 99.00th=[ 873], 99.50th=[ 922], 99.90th=[ 988], 99.95th=[ 988], 00:10:20.549 | 99.99th=[ 988] 00:10:20.549 bw ( KiB/s): min= 4096, max= 4096, per=45.52%, avg=4096.00, stdev= 0.00, samples=1 00:10:20.549 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:20.549 lat (usec) : 250=0.57%, 500=22.83%, 750=64.53%, 1000=8.68% 00:10:20.549 lat (msec) : 2=0.19%, 50=3.21% 00:10:20.549 cpu : usr=0.68%, sys=1.45%, ctx=530, majf=0, minf=1 00:10:20.549 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.549 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.549 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.549 job2: (groupid=0, jobs=1): err= 0: pid=1823832: Wed Nov 20 16:53:12 2024 00:10:20.549 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:20.549 slat (nsec): min=26708, max=28436, avg=27391.83, stdev=335.38 00:10:20.549 clat (usec): min=668, max=1246, avg=961.23, stdev=57.62 00:10:20.549 lat (usec): min=696, max=1273, avg=988.62, stdev=57.59 00:10:20.549 clat percentiles (usec): 00:10:20.549 | 1.00th=[ 783], 5.00th=[ 857], 10.00th=[ 898], 20.00th=[ 938], 00:10:20.549 | 30.00th=[ 947], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 971], 00:10:20.549 | 70.00th=[ 979], 80.00th=[ 996], 90.00th=[ 1020], 95.00th=[ 1045], 00:10:20.549 | 99.00th=[ 1123], 99.50th=[ 1172], 99.90th=[ 1254], 99.95th=[ 1254], 00:10:20.549 | 99.99th=[ 1254] 00:10:20.549 write: IOPS=798, BW=3193KiB/s (3269kB/s)(3196KiB/1001msec); 0 zone resets 00:10:20.549 slat (nsec): min=9272, max=95781, avg=30749.60, stdev=10103.23 00:10:20.549 clat (usec): min=193, max=3692, avg=574.68, stdev=153.62 00:10:20.549 lat (usec): min=203, max=3726, avg=605.43, stdev=156.56 00:10:20.549 clat percentiles (usec): 00:10:20.550 | 1.00th=[ 302], 5.00th=[ 379], 10.00th=[ 429], 20.00th=[ 478], 00:10:20.550 | 30.00th=[ 523], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 603], 00:10:20.550 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 709], 95.00th=[ 734], 00:10:20.550 | 99.00th=[ 799], 99.50th=[ 832], 99.90th=[ 3687], 99.95th=[ 3687], 00:10:20.550 | 99.99th=[ 3687] 00:10:20.550 bw ( KiB/s): min= 4096, max= 4096, per=45.52%, avg=4096.00, stdev= 0.00, samples=1 00:10:20.550 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:20.550 lat (usec) : 250=0.23%, 500=15.41%, 750=43.86%, 1000=33.94% 00:10:20.550 lat (msec) : 2=6.48%, 4=0.08% 00:10:20.550 cpu : usr=3.70%, sys=4.10%, ctx=1313, majf=0, minf=1 00:10:20.550 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.550 issued rwts: total=512,799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.550 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.550 job3: (groupid=0, jobs=1): err= 0: pid=1823833: Wed Nov 20 16:53:12 2024 00:10:20.550 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:10:20.550 slat (nsec): min=8600, max=29285, avg=27055.68, stdev=4151.19 00:10:20.550 clat (usec): min=989, max=42033, avg=34393.59, stdev=16112.62 00:10:20.550 lat (usec): min=1010, max=42061, avg=34420.65, stdev=16114.70 00:10:20.550 clat percentiles (usec): 00:10:20.550 | 
1.00th=[ 988], 5.00th=[ 1004], 10.00th=[ 1004], 20.00th=[41157], 00:10:20.550 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:20.550 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:20.550 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:20.550 | 99.99th=[42206] 00:10:20.550 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:10:20.550 slat (nsec): min=9412, max=53033, avg=23312.31, stdev=12373.44 00:10:20.550 clat (usec): min=147, max=959, avg=456.86, stdev=166.27 00:10:20.550 lat (usec): min=157, max=994, avg=480.17, stdev=175.22 00:10:20.550 clat percentiles (usec): 00:10:20.550 | 1.00th=[ 215], 5.00th=[ 269], 10.00th=[ 281], 20.00th=[ 293], 00:10:20.550 | 30.00th=[ 330], 40.00th=[ 367], 50.00th=[ 420], 60.00th=[ 482], 00:10:20.550 | 70.00th=[ 545], 80.00th=[ 619], 90.00th=[ 693], 95.00th=[ 750], 00:10:20.550 | 99.00th=[ 857], 99.50th=[ 938], 99.90th=[ 963], 99.95th=[ 963], 00:10:20.550 | 99.99th=[ 963] 00:10:20.550 bw ( KiB/s): min= 4096, max= 4096, per=45.52%, avg=4096.00, stdev= 0.00, samples=1 00:10:20.550 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:20.550 lat (usec) : 250=2.62%, 500=58.24%, 750=30.15%, 1000=5.06% 00:10:20.550 lat (msec) : 2=0.56%, 50=3.37% 00:10:20.550 cpu : usr=1.09%, sys=0.99%, ctx=537, majf=0, minf=1 00:10:20.550 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.550 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.550 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.550 00:10:20.550 Run status group 0 (all jobs): 00:10:20.550 READ: bw=2200KiB/s (2253kB/s), 69.5KiB/s-2046KiB/s (71.2kB/s-2095kB/s), io=2284KiB (2339kB), run=1001-1038msec 00:10:20.550 WRITE: bw=8998KiB/s (9214kB/s), 1973KiB/s-3193KiB/s (2020kB/s-3269kB/s), io=9340KiB (9564kB), run=1001-1038msec 00:10:20.550 00:10:20.550 Disk stats (read/write): 00:10:20.550 nvme0n1: ios=64/512, merge=0/0, ticks=630/233, in_queue=863, util=87.58% 00:10:20.550 nvme0n2: ios=42/512, merge=0/0, ticks=541/295, in_queue=836, util=86.63% 00:10:20.550 nvme0n3: ios=546/529, merge=0/0, ticks=1284/239, in_queue=1523, util=97.57% 00:10:20.550 nvme0n4: ios=51/512, merge=0/0, ticks=1256/200, in_queue=1456, util=97.54% 00:10:20.550 16:53:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:20.550 [global] 00:10:20.550 thread=1 00:10:20.550 invalidate=1 00:10:20.550 rw=write 00:10:20.550 time_based=1 00:10:20.550 runtime=1 00:10:20.550 ioengine=libaio 00:10:20.550 direct=1 00:10:20.550 bs=4096 00:10:20.550 iodepth=128 00:10:20.550 norandommap=0 00:10:20.550 numjobs=1 00:10:20.550 00:10:20.550 verify_dump=1 00:10:20.550 verify_backlog=512 00:10:20.550 verify_state_save=0 00:10:20.550 do_verify=1 00:10:20.550 verify=crc32c-intel 00:10:20.550 [job0] 00:10:20.550 filename=/dev/nvme0n1 00:10:20.550 [job1] 00:10:20.550 filename=/dev/nvme0n2 00:10:20.550 [job2] 00:10:20.550 filename=/dev/nvme0n3 00:10:20.550 [job3] 00:10:20.550 filename=/dev/nvme0n4 00:10:20.550 Could not set queue depth (nvme0n1) 00:10:20.550 Could not set queue depth (nvme0n2) 00:10:20.550 Could not set queue depth (nvme0n3) 00:10:20.550 Could not set queue depth 
(nvme0n4) 00:10:20.810 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.810 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.810 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.810 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.810 fio-3.35 00:10:20.810 Starting 4 threads 00:10:22.195 00:10:22.195 job0: (groupid=0, jobs=1): err= 0: pid=1824347: Wed Nov 20 16:53:14 2024 00:10:22.195 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:10:22.195 slat (nsec): min=889, max=17854k, avg=124976.02, stdev=857269.30 00:10:22.195 clat (usec): min=5475, max=44379, avg=14951.88, stdev=8979.44 00:10:22.195 lat (usec): min=5484, max=45273, avg=15076.85, stdev=9068.67 00:10:22.195 clat percentiles (usec): 00:10:22.195 | 1.00th=[ 6521], 5.00th=[ 7373], 10.00th=[ 7963], 20.00th=[ 8291], 00:10:22.195 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[13435], 00:10:22.195 | 70.00th=[19006], 80.00th=[23200], 90.00th=[29492], 95.00th=[33817], 00:10:22.195 | 99.00th=[40109], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:10:22.195 | 99.99th=[44303] 00:10:22.195 write: IOPS=4364, BW=17.0MiB/s (17.9MB/s)(17.2MiB/1006msec); 0 zone resets 00:10:22.195 slat (nsec): min=1557, max=15812k, avg=107119.27, stdev=561474.46 00:10:22.195 clat (usec): min=1179, max=78326, avg=15114.09, stdev=12037.12 00:10:22.195 lat (usec): min=1189, max=78335, avg=15221.21, stdev=12107.42 00:10:22.195 clat percentiles (usec): 00:10:22.195 | 1.00th=[ 6390], 5.00th=[ 7111], 10.00th=[ 7177], 20.00th=[ 7439], 00:10:22.195 | 30.00th=[ 8160], 40.00th=[11076], 50.00th=[12256], 60.00th=[12518], 00:10:22.195 | 70.00th=[12780], 80.00th=[17695], 90.00th=[30802], 95.00th=[37487], 00:10:22.195 | 99.00th=[67634], 99.50th=[73925], 99.90th=[78119], 99.95th=[78119], 00:10:22.195 | 99.99th=[78119] 00:10:22.195 bw ( KiB/s): min=16384, max=17720, per=19.81%, avg=17052.00, stdev=944.69, samples=2 00:10:22.195 iops : min= 4096, max= 4430, avg=4263.00, stdev=236.17, samples=2 00:10:22.195 lat (msec) : 2=0.11%, 4=0.01%, 10=44.22%, 20=34.69%, 50=19.29% 00:10:22.195 lat (msec) : 100=1.68% 00:10:22.195 cpu : usr=3.38%, sys=3.38%, ctx=500, majf=0, minf=1 00:10:22.195 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:22.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.195 issued rwts: total=4096,4391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.195 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.195 job1: (groupid=0, jobs=1): err= 0: pid=1824358: Wed Nov 20 16:53:14 2024 00:10:22.195 read: IOPS=7118, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1007msec) 00:10:22.195 slat (nsec): min=1017, max=11480k, avg=63144.49, stdev=455666.74 00:10:22.195 clat (usec): min=4150, max=23824, avg=8418.78, stdev=2404.73 00:10:22.195 lat (usec): min=4155, max=23834, avg=8481.93, stdev=2437.43 00:10:22.195 clat percentiles (usec): 00:10:22.195 | 1.00th=[ 5145], 5.00th=[ 5669], 10.00th=[ 5997], 20.00th=[ 6456], 00:10:22.195 | 30.00th=[ 6915], 40.00th=[ 7373], 50.00th=[ 7963], 60.00th=[ 8586], 00:10:22.195 | 70.00th=[ 8979], 80.00th=[ 9765], 90.00th=[11469], 95.00th=[12518], 00:10:22.195 | 99.00th=[16581], 99.50th=[19268], 99.90th=[23200], 
99.95th=[23725], 00:10:22.195 | 99.99th=[23725] 00:10:22.195 write: IOPS=7505, BW=29.3MiB/s (30.7MB/s)(29.5MiB/1007msec); 0 zone resets 00:10:22.195 slat (nsec): min=1712, max=15837k, avg=66411.32, stdev=425676.36 00:10:22.195 clat (usec): min=1866, max=28874, avg=8884.86, stdev=4961.88 00:10:22.195 lat (usec): min=1894, max=28879, avg=8951.27, stdev=4994.16 00:10:22.195 clat percentiles (usec): 00:10:22.195 | 1.00th=[ 3064], 5.00th=[ 3818], 10.00th=[ 4424], 20.00th=[ 5735], 00:10:22.195 | 30.00th=[ 6128], 40.00th=[ 6325], 50.00th=[ 6652], 60.00th=[ 7504], 00:10:22.195 | 70.00th=[ 9634], 80.00th=[12256], 90.00th=[17171], 95.00th=[19006], 00:10:22.195 | 99.00th=[25035], 99.50th=[27395], 99.90th=[28443], 99.95th=[28967], 00:10:22.195 | 99.99th=[28967] 00:10:22.195 bw ( KiB/s): min=24776, max=34672, per=34.53%, avg=29724.00, stdev=6997.53, samples=2 00:10:22.195 iops : min= 6194, max= 8668, avg=7431.00, stdev=1749.38, samples=2 00:10:22.195 lat (msec) : 2=0.01%, 4=3.87%, 10=72.15%, 20=21.85%, 50=2.12% 00:10:22.195 cpu : usr=5.27%, sys=9.15%, ctx=489, majf=0, minf=1 00:10:22.195 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:22.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.195 issued rwts: total=7168,7558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.195 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.195 job2: (groupid=0, jobs=1): err= 0: pid=1824360: Wed Nov 20 16:53:14 2024 00:10:22.195 read: IOPS=5960, BW=23.3MiB/s (24.4MB/s)(23.4MiB/1004msec) 00:10:22.195 slat (nsec): min=1012, max=8539.2k, avg=71367.86, stdev=494177.58 00:10:22.195 clat (usec): min=1492, max=33314, avg=9338.09, stdev=3735.71 00:10:22.195 lat (usec): min=1505, max=33324, avg=9409.45, stdev=3773.99 00:10:22.195 clat percentiles (usec): 00:10:22.195 | 1.00th=[ 3163], 5.00th=[ 5669], 10.00th=[ 6652], 20.00th=[ 7177], 00:10:22.195 | 30.00th=[ 7439], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 8979], 00:10:22.195 | 70.00th=[ 9634], 80.00th=[10814], 90.00th=[13173], 95.00th=[17171], 00:10:22.195 | 99.00th=[24511], 99.50th=[26346], 99.90th=[32900], 99.95th=[33424], 00:10:22.195 | 99.99th=[33424] 00:10:22.195 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:10:22.195 slat (nsec): min=1713, max=6885.6k, avg=83217.95, stdev=397812.42 00:10:22.195 clat (usec): min=1208, max=33277, avg=11608.79, stdev=6275.53 00:10:22.195 lat (usec): min=1220, max=33280, avg=11692.01, stdev=6315.08 00:10:22.195 clat percentiles (usec): 00:10:22.195 | 1.00th=[ 2540], 5.00th=[ 4047], 10.00th=[ 4752], 20.00th=[ 6325], 00:10:22.195 | 30.00th=[ 7046], 40.00th=[ 8979], 50.00th=[11600], 60.00th=[12387], 00:10:22.195 | 70.00th=[12649], 80.00th=[15401], 90.00th=[22152], 95.00th=[24511], 00:10:22.195 | 99.00th=[28181], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:10:22.195 | 99.99th=[33162] 00:10:22.195 bw ( KiB/s): min=22576, max=26576, per=28.55%, avg=24576.00, stdev=2828.43, samples=2 00:10:22.195 iops : min= 5644, max= 6644, avg=6144.00, stdev=707.11, samples=2 00:10:22.195 lat (msec) : 2=0.35%, 4=3.45%, 10=54.56%, 20=33.31%, 50=8.34% 00:10:22.195 cpu : usr=3.99%, sys=5.58%, ctx=632, majf=0, minf=1 00:10:22.195 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:22.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:10:22.195 issued rwts: total=5984,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.195 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.195 job3: (groupid=0, jobs=1): err= 0: pid=1824361: Wed Nov 20 16:53:14 2024 00:10:22.196 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:10:22.196 slat (nsec): min=924, max=12065k, avg=145890.16, stdev=865374.21 00:10:22.196 clat (usec): min=6107, max=40721, avg=18146.61, stdev=9660.37 00:10:22.196 lat (usec): min=6113, max=40934, avg=18292.50, stdev=9747.71 00:10:22.196 clat percentiles (usec): 00:10:22.196 | 1.00th=[ 7570], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9241], 00:10:22.196 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[14484], 60.00th=[21890], 00:10:22.196 | 70.00th=[25560], 80.00th=[28443], 90.00th=[32113], 95.00th=[33817], 00:10:22.196 | 99.00th=[36963], 99.50th=[38011], 99.90th=[40633], 99.95th=[40633], 00:10:22.196 | 99.99th=[40633] 00:10:22.196 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:10:22.196 slat (nsec): min=1587, max=16289k, avg=149557.29, stdev=760593.55 00:10:22.196 clat (usec): min=1672, max=59292, avg=19881.62, stdev=13752.51 00:10:22.196 lat (usec): min=5284, max=59307, avg=20031.18, stdev=13847.26 00:10:22.196 clat percentiles (usec): 00:10:22.196 | 1.00th=[ 5538], 5.00th=[ 7701], 10.00th=[ 7898], 20.00th=[ 9372], 00:10:22.196 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12911], 60.00th=[17171], 00:10:22.196 | 70.00th=[21365], 80.00th=[28443], 90.00th=[49021], 95.00th=[51119], 00:10:22.196 | 99.00th=[53740], 99.50th=[54789], 99.90th=[59507], 99.95th=[59507], 00:10:22.196 | 99.99th=[59507] 00:10:22.196 bw ( KiB/s): min= 9272, max=18320, per=16.03%, avg=13796.00, stdev=6397.90, samples=2 00:10:22.196 iops : min= 2318, max= 4580, avg=3449.00, stdev=1599.48, samples=2 00:10:22.196 lat (msec) : 2=0.02%, 10=32.82%, 20=30.97%, 50=32.10%, 100=4.11% 00:10:22.196 cpu : usr=3.39%, sys=2.99%, ctx=383, majf=0, minf=1 00:10:22.196 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:22.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.196 issued rwts: total=3072,3577,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.196 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.196 00:10:22.196 Run status group 0 (all jobs): 00:10:22.196 READ: bw=78.8MiB/s (82.7MB/s), 12.0MiB/s-27.8MiB/s (12.5MB/s-29.2MB/s), io=79.4MiB (83.2MB), run=1004-1007msec 00:10:22.196 WRITE: bw=84.1MiB/s (88.1MB/s), 13.9MiB/s-29.3MiB/s (14.6MB/s-30.7MB/s), io=84.6MiB (88.8MB), run=1004-1007msec 00:10:22.196 00:10:22.196 Disk stats (read/write): 00:10:22.196 nvme0n1: ios=3122/3376, merge=0/0, ticks=25889/26193, in_queue=52082, util=95.59% 00:10:22.196 nvme0n2: ios=6189/6615, merge=0/0, ticks=48902/50983, in_queue=99885, util=97.76% 00:10:22.196 nvme0n3: ios=5174/5407, merge=0/0, ticks=46290/55291, in_queue=101581, util=97.36% 00:10:22.196 nvme0n4: ios=2425/2560, merge=0/0, ticks=24411/26757, in_queue=51168, util=89.42% 00:10:22.196 16:53:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:22.196 [global] 00:10:22.196 thread=1 00:10:22.196 invalidate=1 00:10:22.196 rw=randwrite 00:10:22.196 time_based=1 00:10:22.196 runtime=1 00:10:22.196 ioengine=libaio 00:10:22.196 direct=1 00:10:22.196 bs=4096 
00:10:22.196 iodepth=128 00:10:22.196 norandommap=0 00:10:22.196 numjobs=1 00:10:22.196 00:10:22.196 verify_dump=1 00:10:22.196 verify_backlog=512 00:10:22.196 verify_state_save=0 00:10:22.196 do_verify=1 00:10:22.196 verify=crc32c-intel 00:10:22.196 [job0] 00:10:22.196 filename=/dev/nvme0n1 00:10:22.196 [job1] 00:10:22.196 filename=/dev/nvme0n2 00:10:22.196 [job2] 00:10:22.196 filename=/dev/nvme0n3 00:10:22.196 [job3] 00:10:22.196 filename=/dev/nvme0n4 00:10:22.196 Could not set queue depth (nvme0n1) 00:10:22.196 Could not set queue depth (nvme0n2) 00:10:22.196 Could not set queue depth (nvme0n3) 00:10:22.196 Could not set queue depth (nvme0n4) 00:10:22.456 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:22.456 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:22.456 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:22.456 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:22.456 fio-3.35 00:10:22.456 Starting 4 threads 00:10:23.840 00:10:23.841 job0: (groupid=0, jobs=1): err= 0: pid=1824815: Wed Nov 20 16:53:15 2024 00:10:23.841 read: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec) 00:10:23.841 slat (nsec): min=931, max=13595k, avg=72021.43, stdev=533806.87 00:10:23.841 clat (usec): min=1646, max=51363, avg=10284.14, stdev=6459.73 00:10:23.841 lat (usec): min=1660, max=51391, avg=10356.16, stdev=6506.51 00:10:23.841 clat percentiles (usec): 00:10:23.841 | 1.00th=[ 3982], 5.00th=[ 5866], 10.00th=[ 6849], 20.00th=[ 7439], 00:10:23.841 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 8979], 00:10:23.841 | 70.00th=[ 9241], 80.00th=[10290], 90.00th=[12911], 95.00th=[27395], 00:10:23.841 | 99.00th=[38536], 99.50th=[41157], 99.90th=[43254], 99.95th=[45351], 00:10:23.841 | 99.99th=[51119] 00:10:23.841 write: IOPS=6994, BW=27.3MiB/s (28.6MB/s)(27.3MiB/1001msec); 0 zone resets 00:10:23.841 slat (nsec): min=1558, max=12320k, avg=58485.68, stdev=432958.39 00:10:23.841 clat (usec): min=881, max=42681, avg=8351.50, stdev=3631.66 00:10:23.841 lat (usec): min=1021, max=42688, avg=8409.98, stdev=3672.18 00:10:23.841 clat percentiles (usec): 00:10:23.841 | 1.00th=[ 2671], 5.00th=[ 4293], 10.00th=[ 5211], 20.00th=[ 6587], 00:10:23.841 | 30.00th=[ 7046], 40.00th=[ 7373], 50.00th=[ 7701], 60.00th=[ 8225], 00:10:23.841 | 70.00th=[ 8717], 80.00th=[ 9372], 90.00th=[10814], 95.00th=[15926], 00:10:23.841 | 99.00th=[20841], 99.50th=[27919], 99.90th=[35914], 99.95th=[42730], 00:10:23.841 | 99.99th=[42730] 00:10:23.841 bw ( KiB/s): min=24576, max=24576, per=24.61%, avg=24576.00, stdev= 0.00, samples=1 00:10:23.841 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:10:23.841 lat (usec) : 1000=0.01% 00:10:23.841 lat (msec) : 2=0.49%, 4=2.37%, 10=79.15%, 20=14.02%, 50=3.95% 00:10:23.841 lat (msec) : 100=0.01% 00:10:23.841 cpu : usr=4.10%, sys=6.30%, ctx=679, majf=0, minf=1 00:10:23.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:23.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:23.841 issued rwts: total=6656,7001,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.841 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:23.841 job1: (groupid=0, jobs=1): err= 0: 
pid=1824833: Wed Nov 20 16:53:15 2024 00:10:23.841 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:10:23.841 slat (nsec): min=878, max=20365k, avg=110718.72, stdev=780131.99 00:10:23.841 clat (usec): min=5746, max=72092, avg=12845.92, stdev=10262.98 00:10:23.841 lat (usec): min=5747, max=72101, avg=12956.64, stdev=10354.96 00:10:23.841 clat percentiles (usec): 00:10:23.841 | 1.00th=[ 6456], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 7963], 00:10:23.841 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9503], 00:10:23.841 | 70.00th=[10814], 80.00th=[12911], 90.00th=[24511], 95.00th=[34866], 00:10:23.841 | 99.00th=[63177], 99.50th=[63701], 99.90th=[66847], 99.95th=[66847], 00:10:23.841 | 99.99th=[71828] 00:10:23.841 write: IOPS=5396, BW=21.1MiB/s (22.1MB/s)(21.2MiB/1007msec); 0 zone resets 00:10:23.841 slat (nsec): min=1491, max=13878k, avg=76201.67, stdev=427687.09 00:10:23.841 clat (usec): min=1276, max=72096, avg=11396.12, stdev=9443.40 00:10:23.841 lat (usec): min=1287, max=72103, avg=11472.32, stdev=9484.44 00:10:23.841 clat percentiles (usec): 00:10:23.841 | 1.00th=[ 5735], 5.00th=[ 6063], 10.00th=[ 6521], 20.00th=[ 6980], 00:10:23.841 | 30.00th=[ 7177], 40.00th=[ 7439], 50.00th=[ 7963], 60.00th=[ 8455], 00:10:23.841 | 70.00th=[10552], 80.00th=[13829], 90.00th=[18744], 95.00th=[29230], 00:10:23.841 | 99.00th=[66847], 99.50th=[67634], 99.90th=[68682], 99.95th=[70779], 00:10:23.841 | 99.99th=[71828] 00:10:23.841 bw ( KiB/s): min=13776, max=28672, per=21.26%, avg=21224.00, stdev=10533.06, samples=2 00:10:23.841 iops : min= 3444, max= 7168, avg=5306.00, stdev=2633.27, samples=2 00:10:23.841 lat (msec) : 2=0.10%, 4=0.01%, 10=67.35%, 20=21.82%, 50=9.02% 00:10:23.841 lat (msec) : 100=1.70% 00:10:23.841 cpu : usr=2.98%, sys=2.98%, ctx=800, majf=0, minf=2 00:10:23.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:23.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:23.841 issued rwts: total=5120,5434,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.841 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:23.841 job2: (groupid=0, jobs=1): err= 0: pid=1824856: Wed Nov 20 16:53:15 2024 00:10:23.841 read: IOPS=5936, BW=23.2MiB/s (24.3MB/s)(23.4MiB/1011msec) 00:10:23.841 slat (nsec): min=952, max=7954.1k, avg=78167.58, stdev=528534.62 00:10:23.841 clat (usec): min=3783, max=24095, avg=9671.87, stdev=2404.41 00:10:23.841 lat (usec): min=3794, max=24098, avg=9750.04, stdev=2450.81 00:10:23.841 clat percentiles (usec): 00:10:23.841 | 1.00th=[ 5866], 5.00th=[ 7308], 10.00th=[ 7701], 20.00th=[ 8225], 00:10:23.841 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372], 00:10:23.841 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[12256], 95.00th=[14353], 00:10:23.841 | 99.00th=[20317], 99.50th=[21890], 99.90th=[22938], 99.95th=[23987], 00:10:23.841 | 99.99th=[23987] 00:10:23.841 write: IOPS=6077, BW=23.7MiB/s (24.9MB/s)(24.0MiB/1011msec); 0 zone resets 00:10:23.841 slat (nsec): min=1598, max=53505k, avg=81340.65, stdev=778278.09 00:10:23.841 clat (usec): min=2970, max=61886, avg=11249.04, stdev=8093.78 00:10:23.841 lat (usec): min=2979, max=61900, avg=11330.38, stdev=8133.12 00:10:23.841 clat percentiles (usec): 00:10:23.841 | 1.00th=[ 4015], 5.00th=[ 5800], 10.00th=[ 7242], 20.00th=[ 7701], 00:10:23.841 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 9110], 00:10:23.841 | 70.00th=[10159], 
80.00th=[14746], 90.00th=[16909], 95.00th=[21103], 00:10:23.841 | 99.00th=[60031], 99.50th=[60556], 99.90th=[61080], 99.95th=[61604], 00:10:23.841 | 99.99th=[62129] 00:10:23.841 bw ( KiB/s): min=22984, max=26168, per=24.61%, avg=24576.00, stdev=2251.43, samples=2 00:10:23.841 iops : min= 5746, max= 6542, avg=6144.00, stdev=562.86, samples=2 00:10:23.841 lat (msec) : 4=0.54%, 10=70.07%, 20=25.84%, 50=2.51%, 100=1.05% 00:10:23.841 cpu : usr=2.97%, sys=6.44%, ctx=687, majf=0, minf=1 00:10:23.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:23.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:23.841 issued rwts: total=6002,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.841 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:23.841 job3: (groupid=0, jobs=1): err= 0: pid=1824863: Wed Nov 20 16:53:15 2024 00:10:23.841 read: IOPS=6377, BW=24.9MiB/s (26.1MB/s)(25.0MiB/1005msec) 00:10:23.841 slat (nsec): min=979, max=10660k, avg=79691.74, stdev=584420.20 00:10:23.841 clat (usec): min=3523, max=39199, avg=10121.72, stdev=3387.59 00:10:23.841 lat (usec): min=3528, max=39202, avg=10201.42, stdev=3435.69 00:10:23.841 clat percentiles (usec): 00:10:23.841 | 1.00th=[ 4621], 5.00th=[ 6783], 10.00th=[ 7177], 20.00th=[ 7898], 00:10:23.841 | 30.00th=[ 8225], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[10028], 00:10:23.841 | 70.00th=[10683], 80.00th=[11863], 90.00th=[14353], 95.00th=[15270], 00:10:23.841 | 99.00th=[21627], 99.50th=[30016], 99.90th=[38536], 99.95th=[39060], 00:10:23.841 | 99.99th=[39060] 00:10:23.841 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:10:23.841 slat (nsec): min=1589, max=10774k, avg=66306.94, stdev=453528.61 00:10:23.841 clat (usec): min=1190, max=39194, avg=9406.83, stdev=5300.97 00:10:23.841 lat (usec): min=1226, max=39196, avg=9473.14, stdev=5335.73 00:10:23.841 clat percentiles (usec): 00:10:23.841 | 1.00th=[ 3228], 5.00th=[ 4555], 10.00th=[ 4948], 20.00th=[ 5800], 00:10:23.841 | 30.00th=[ 7242], 40.00th=[ 7635], 50.00th=[ 7963], 60.00th=[ 8291], 00:10:23.841 | 70.00th=[ 9110], 80.00th=[11076], 90.00th=[15401], 95.00th=[22414], 00:10:23.841 | 99.00th=[30016], 99.50th=[31589], 99.90th=[34866], 99.95th=[34866], 00:10:23.841 | 99.99th=[39060] 00:10:23.841 bw ( KiB/s): min=22512, max=30736, per=26.67%, avg=26624.00, stdev=5815.25, samples=2 00:10:23.841 iops : min= 5628, max= 7684, avg=6656.00, stdev=1453.81, samples=2 00:10:23.841 lat (msec) : 2=0.07%, 4=1.48%, 10=64.75%, 20=29.69%, 50=4.02% 00:10:23.841 cpu : usr=4.88%, sys=6.47%, ctx=502, majf=0, minf=2 00:10:23.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:23.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:23.841 issued rwts: total=6409,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.841 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:23.841 00:10:23.841 Run status group 0 (all jobs): 00:10:23.841 READ: bw=93.5MiB/s (98.0MB/s), 19.9MiB/s-26.0MiB/s (20.8MB/s-27.2MB/s), io=94.5MiB (99.1MB), run=1001-1011msec 00:10:23.841 WRITE: bw=97.5MiB/s (102MB/s), 21.1MiB/s-27.3MiB/s (22.1MB/s-28.6MB/s), io=98.6MiB (103MB), run=1001-1011msec 00:10:23.841 00:10:23.841 Disk stats (read/write): 00:10:23.841 nvme0n1: ios=5603/5632, merge=0/0, ticks=32781/25872, in_queue=58653, 
util=97.60% 00:10:23.841 nvme0n2: ios=4642/5079, merge=0/0, ticks=18961/17430, in_queue=36391, util=87.26% 00:10:23.841 nvme0n3: ios=5004/5120, merge=0/0, ticks=30870/33932, in_queue=64802, util=96.62% 00:10:23.842 nvme0n4: ios=5169/5209, merge=0/0, ticks=51789/50591, in_queue=102380, util=96.15% 00:10:23.842 16:53:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:23.842 16:53:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1824919 00:10:23.842 16:53:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:23.842 16:53:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:23.842 [global] 00:10:23.842 thread=1 00:10:23.842 invalidate=1 00:10:23.842 rw=read 00:10:23.842 time_based=1 00:10:23.842 runtime=10 00:10:23.842 ioengine=libaio 00:10:23.842 direct=1 00:10:23.842 bs=4096 00:10:23.842 iodepth=1 00:10:23.842 norandommap=1 00:10:23.842 numjobs=1 00:10:23.842 00:10:23.842 [job0] 00:10:23.842 filename=/dev/nvme0n1 00:10:23.842 [job1] 00:10:23.842 filename=/dev/nvme0n2 00:10:23.842 [job2] 00:10:23.842 filename=/dev/nvme0n3 00:10:23.842 [job3] 00:10:23.842 filename=/dev/nvme0n4 00:10:23.842 Could not set queue depth (nvme0n1) 00:10:23.842 Could not set queue depth (nvme0n2) 00:10:23.842 Could not set queue depth (nvme0n3) 00:10:23.842 Could not set queue depth (nvme0n4) 00:10:24.102 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.102 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.102 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.102 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.102 fio-3.35 00:10:24.102 Starting 4 threads 00:10:27.397 16:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:27.397 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:27.397 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=454656, buflen=4096 00:10:27.397 fio: pid=1825326, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:27.397 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:27.397 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:27.397 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=6512640, buflen=4096 00:10:27.397 fio: pid=1825319, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:27.397 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11763712, buflen=4096 00:10:27.397 fio: pid=1825286, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:27.397 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:27.397 16:53:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:27.397 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=2510848, buflen=4096 00:10:27.397 fio: pid=1825300, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:27.397 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:27.397 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:27.658 00:10:27.658 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1825286: Wed Nov 20 16:53:19 2024 00:10:27.658 read: IOPS=968, BW=3875KiB/s (3968kB/s)(11.2MiB/2965msec) 00:10:27.658 slat (usec): min=6, max=27965, avg=56.21, stdev=827.44 00:10:27.658 clat (usec): min=263, max=3781, avg=960.34, stdev=95.71 00:10:27.658 lat (usec): min=291, max=29010, avg=1016.56, stdev=835.45 00:10:27.658 clat percentiles (usec): 00:10:27.658 | 1.00th=[ 594], 5.00th=[ 816], 10.00th=[ 889], 20.00th=[ 930], 00:10:27.658 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 979], 00:10:27.658 | 70.00th=[ 996], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1045], 00:10:27.658 | 99.00th=[ 1090], 99.50th=[ 1123], 99.90th=[ 1172], 99.95th=[ 1188], 00:10:27.658 | 99.99th=[ 3785] 00:10:27.658 bw ( KiB/s): min= 3968, max= 4048, per=60.88%, avg=4008.00, stdev=28.84, samples=5 00:10:27.658 iops : min= 992, max= 1012, avg=1002.00, stdev= 7.21, samples=5 00:10:27.658 lat (usec) : 500=0.45%, 750=1.74%, 1000=72.47% 00:10:27.658 lat (msec) : 2=25.27%, 4=0.03% 00:10:27.658 cpu : usr=1.86%, sys=3.85%, ctx=2877, majf=0, minf=1 00:10:27.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.658 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.658 issued rwts: total=2873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.658 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1825300: Wed Nov 20 16:53:19 2024 00:10:27.658 read: IOPS=194, BW=778KiB/s (797kB/s)(2452KiB/3151msec) 00:10:27.658 slat (usec): min=6, max=21475, avg=167.13, stdev=1530.55 00:10:27.658 clat (usec): min=407, max=42200, avg=4923.04, stdev=12147.61 00:10:27.658 lat (usec): min=434, max=42225, avg=5090.39, stdev=12198.43 00:10:27.658 clat percentiles (usec): 00:10:27.658 | 1.00th=[ 523], 5.00th=[ 652], 10.00th=[ 725], 20.00th=[ 816], 00:10:27.658 | 30.00th=[ 922], 40.00th=[ 963], 50.00th=[ 988], 60.00th=[ 1004], 00:10:27.658 | 70.00th=[ 1020], 80.00th=[ 1057], 90.00th=[ 1205], 95.00th=[42206], 00:10:27.658 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:27.658 | 99.99th=[42206] 00:10:27.658 bw ( KiB/s): min= 88, max= 3534, per=10.16%, avg=669.00, stdev=1403.57, samples=6 00:10:27.658 iops : min= 22, max= 883, avg=167.17, stdev=350.69, samples=6 00:10:27.658 lat (usec) : 500=0.65%, 750=14.01%, 1000=45.93% 00:10:27.658 lat (msec) : 2=29.48%, 50=9.77% 00:10:27.658 cpu : usr=0.38%, sys=0.70%, ctx=621, majf=0, minf=2 00:10:27.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:10:27.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.658 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.658 issued rwts: total=614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.658 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1825319: Wed Nov 20 16:53:19 2024 00:10:27.658 read: IOPS=565, BW=2263KiB/s (2317kB/s)(6360KiB/2811msec) 00:10:27.658 slat (usec): min=5, max=14846, avg=41.69, stdev=446.44 00:10:27.658 clat (usec): min=266, max=42041, avg=1706.39, stdev=5673.62 00:10:27.658 lat (usec): min=273, max=42068, avg=1748.09, stdev=5689.74 00:10:27.658 clat percentiles (usec): 00:10:27.658 | 1.00th=[ 510], 5.00th=[ 586], 10.00th=[ 627], 20.00th=[ 742], 00:10:27.658 | 30.00th=[ 783], 40.00th=[ 816], 50.00th=[ 848], 60.00th=[ 979], 00:10:27.658 | 70.00th=[ 1057], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1205], 00:10:27.658 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:27.658 | 99.99th=[42206] 00:10:27.658 bw ( KiB/s): min= 832, max= 4880, per=36.52%, avg=2404.80, stdev=1730.33, samples=5 00:10:27.658 iops : min= 208, max= 1220, avg=601.20, stdev=432.58, samples=5 00:10:27.658 lat (usec) : 500=0.75%, 750=20.36%, 1000=41.11% 00:10:27.658 lat (msec) : 2=35.70%, 50=2.01% 00:10:27.658 cpu : usr=0.32%, sys=2.88%, ctx=1593, majf=0, minf=2 00:10:27.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.658 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.658 issued rwts: total=1591,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.658 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1825326: Wed Nov 20 16:53:19 2024 00:10:27.658 read: IOPS=42, BW=170KiB/s (174kB/s)(444KiB/2616msec) 00:10:27.658 slat (nsec): min=6154, max=40717, avg=24033.73, stdev=6646.88 00:10:27.658 clat (usec): min=212, max=42272, avg=23337.70, stdev=20559.92 00:10:27.658 lat (usec): min=219, max=42298, avg=23361.72, stdev=20561.81 00:10:27.658 clat percentiles (usec): 00:10:27.658 | 1.00th=[ 258], 5.00th=[ 519], 10.00th=[ 644], 20.00th=[ 758], 00:10:27.658 | 30.00th=[ 832], 40.00th=[ 865], 50.00th=[41157], 60.00th=[41681], 00:10:27.658 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:27.658 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:27.658 | 99.99th=[42206] 00:10:27.658 bw ( KiB/s): min= 88, max= 464, per=2.61%, avg=172.80, stdev=162.93, samples=5 00:10:27.658 iops : min= 22, max= 116, avg=43.20, stdev=40.73, samples=5 00:10:27.658 lat (usec) : 250=0.89%, 500=2.68%, 750=15.18%, 1000=25.89% 00:10:27.658 lat (msec) : 50=54.46% 00:10:27.658 cpu : usr=0.00%, sys=0.19%, ctx=112, majf=0, minf=2 00:10:27.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.658 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.658 issued rwts: total=112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.658 00:10:27.658 Run status group 0 (all jobs): 00:10:27.658 READ: bw=6583KiB/s 
(6741kB/s), 170KiB/s-3875KiB/s (174kB/s-3968kB/s), io=20.3MiB (21.2MB), run=2616-3151msec 00:10:27.658 00:10:27.658 Disk stats (read/write): 00:10:27.658 nvme0n1: ios=2770/0, merge=0/0, ticks=2651/0, in_queue=2651, util=92.05% 00:10:27.658 nvme0n2: ios=576/0, merge=0/0, ticks=2954/0, in_queue=2954, util=93.15% 00:10:27.658 nvme0n3: ios=1526/0, merge=0/0, ticks=2394/0, in_queue=2394, util=96.03% 00:10:27.658 nvme0n4: ios=110/0, merge=0/0, ticks=2550/0, in_queue=2550, util=96.46% 00:10:27.658 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:27.658 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:27.918 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:27.918 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:28.178 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:28.178 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:28.178 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:28.178 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:28.438 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:28.438 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1824919 00:10:28.438 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:28.438 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:28.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.438 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:28.438 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:28.438 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:28.438 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.438 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:28.438 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.438 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:28.438 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:28.438 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:28.438 nvmf hotplug test: fio failed as expected 
00:10:28.438 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.697 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:28.697 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:28.697 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:28.698 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:28.698 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:28.698 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:28.698 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:28.698 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:28.698 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:28.698 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:28.698 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:28.698 rmmod nvme_tcp 00:10:28.698 rmmod nvme_fabrics 00:10:28.698 rmmod nvme_keyring 00:10:28.698 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:28.698 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:28.698 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:28.698 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1821382 ']' 00:10:28.698 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1821382 00:10:28.698 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1821382 ']' 00:10:28.698 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1821382 00:10:28.698 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:28.698 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.698 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1821382 00:10:28.957 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:28.957 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:28.957 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1821382' 00:10:28.957 killing process with pid 1821382 00:10:28.957 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1821382 00:10:28.957 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1821382 00:10:28.957 16:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:28.957 16:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:10:28.957 16:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:28.957 16:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:28.957 16:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:28.957 16:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:28.957 16:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:28.957 16:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:28.957 16:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:28.957 16:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.957 16:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.957 16:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:31.501 00:10:31.501 real 0m29.430s 00:10:31.501 user 2m37.728s 00:10:31.501 sys 0m9.214s 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.501 ************************************ 00:10:31.501 END TEST nvmf_fio_target 00:10:31.501 ************************************ 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:31.501 ************************************ 00:10:31.501 START TEST nvmf_bdevio 00:10:31.501 ************************************ 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:31.501 * Looking for test storage... 
00:10:31.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:31.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.501 --rc genhtml_branch_coverage=1 00:10:31.501 --rc genhtml_function_coverage=1 00:10:31.501 --rc genhtml_legend=1 00:10:31.501 --rc geninfo_all_blocks=1 00:10:31.501 --rc geninfo_unexecuted_blocks=1 00:10:31.501 00:10:31.501 ' 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:31.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.501 --rc genhtml_branch_coverage=1 00:10:31.501 --rc genhtml_function_coverage=1 00:10:31.501 --rc genhtml_legend=1 00:10:31.501 --rc geninfo_all_blocks=1 00:10:31.501 --rc geninfo_unexecuted_blocks=1 00:10:31.501 00:10:31.501 ' 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:31.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.501 --rc genhtml_branch_coverage=1 00:10:31.501 --rc genhtml_function_coverage=1 00:10:31.501 --rc genhtml_legend=1 00:10:31.501 --rc geninfo_all_blocks=1 00:10:31.501 --rc geninfo_unexecuted_blocks=1 00:10:31.501 00:10:31.501 ' 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:31.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.501 --rc genhtml_branch_coverage=1 00:10:31.501 --rc genhtml_function_coverage=1 00:10:31.501 --rc genhtml_legend=1 00:10:31.501 --rc geninfo_all_blocks=1 00:10:31.501 --rc geninfo_unexecuted_blocks=1 00:10:31.501 00:10:31.501 ' 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.501 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:31.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:31.502 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:39.637 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:39.638 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:39.638 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:39.638 16:53:30 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:39.638 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:39.638 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:39.638 
16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:39.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:10:39.638 00:10:39.638 --- 10.0.0.2 ping statistics --- 00:10:39.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.638 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:39.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:39.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:10:39.638 00:10:39.638 --- 10.0.0.1 ping statistics --- 00:10:39.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.638 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1830461 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1830461 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1830461 ']' 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.638 16:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.638 [2024-11-20 16:53:30.981566] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:10:39.638 [2024-11-20 16:53:30.981630] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.638 [2024-11-20 16:53:31.083170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.638 [2024-11-20 16:53:31.135640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.638 [2024-11-20 16:53:31.135691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.638 [2024-11-20 16:53:31.135700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.638 [2024-11-20 16:53:31.135707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.638 [2024-11-20 16:53:31.135714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.638 [2024-11-20 16:53:31.138085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:39.638 [2024-11-20 16:53:31.138228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:39.638 [2024-11-20 16:53:31.138386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:39.638 [2024-11-20 16:53:31.138387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.899 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.899 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:39.899 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.899 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.899 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.899 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.899 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:39.899 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.900 [2024-11-20 16:53:31.863798] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.900 Malloc0 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.900 16:53:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.900 [2024-11-20 16:53:31.941701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:39.900 { 00:10:39.900 "params": { 00:10:39.900 "name": "Nvme$subsystem", 00:10:39.900 "trtype": "$TEST_TRANSPORT", 00:10:39.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:39.900 "adrfam": "ipv4", 00:10:39.900 "trsvcid": "$NVMF_PORT", 00:10:39.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:39.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:39.900 "hdgst": ${hdgst:-false}, 00:10:39.900 "ddgst": ${ddgst:-false} 00:10:39.900 }, 00:10:39.900 "method": "bdev_nvme_attach_controller" 00:10:39.900 } 00:10:39.900 EOF 00:10:39.900 )") 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:39.900 16:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:39.900 "params": { 00:10:39.900 "name": "Nvme1", 00:10:39.900 "trtype": "tcp", 00:10:39.900 "traddr": "10.0.0.2", 00:10:39.900 "adrfam": "ipv4", 00:10:39.900 "trsvcid": "4420", 00:10:39.900 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:39.900 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:39.900 "hdgst": false, 00:10:39.900 "ddgst": false 00:10:39.900 }, 00:10:39.900 "method": "bdev_nvme_attach_controller" 00:10:39.900 }' 00:10:39.900 [2024-11-20 16:53:32.000787] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:10:39.900 [2024-11-20 16:53:32.000850] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1830713 ] 00:10:40.160 [2024-11-20 16:53:32.094459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:40.160 [2024-11-20 16:53:32.151008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.160 [2024-11-20 16:53:32.151187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.160 [2024-11-20 16:53:32.151191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.420 I/O targets: 00:10:40.420 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:40.420 00:10:40.420 00:10:40.420 CUnit - A unit testing framework for C - Version 2.1-3 00:10:40.420 http://cunit.sourceforge.net/ 00:10:40.420 00:10:40.420 00:10:40.420 Suite: bdevio tests on: Nvme1n1 00:10:40.420 Test: blockdev write read block ...passed 00:10:40.420 Test: blockdev write zeroes read block ...passed 00:10:40.420 Test: blockdev write zeroes read no split ...passed 00:10:40.680 Test: blockdev write zeroes read split ...passed 00:10:40.680 Test: blockdev write zeroes read split partial ...passed 00:10:40.680 Test: blockdev reset ...[2024-11-20 16:53:32.617919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:40.680 [2024-11-20 16:53:32.618022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x54c970 (9): Bad file descriptor 00:10:40.680 [2024-11-20 16:53:32.632253] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:40.680 passed 00:10:40.680 Test: blockdev write read 8 blocks ...passed 00:10:40.680 Test: blockdev write read size > 128k ...passed 00:10:40.680 Test: blockdev write read invalid size ...passed 00:10:40.680 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:40.680 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:40.680 Test: blockdev write read max offset ...passed 00:10:40.680 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:40.680 Test: blockdev writev readv 8 blocks ...passed 00:10:40.680 Test: blockdev writev readv 30 x 1block ...passed 00:10:40.680 Test: blockdev writev readv block ...passed 00:10:40.680 Test: blockdev writev readv size > 128k ...passed 00:10:40.940 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:40.940 Test: blockdev comparev and writev ...[2024-11-20 16:53:32.856817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.940 [2024-11-20 16:53:32.856866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:40.940 [2024-11-20 16:53:32.856883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.940 [2024-11-20 16:53:32.856891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:40.940 [2024-11-20 16:53:32.857443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.940 [2024-11-20 16:53:32.857456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:40.940 [2024-11-20 16:53:32.857471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.940 [2024-11-20 16:53:32.857479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:40.940 [2024-11-20 16:53:32.858015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.940 [2024-11-20 16:53:32.858027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:40.940 [2024-11-20 16:53:32.858040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.940 [2024-11-20 16:53:32.858048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:40.940 [2024-11-20 16:53:32.858620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.940 [2024-11-20 16:53:32.858631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:40.940 [2024-11-20 16:53:32.858644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.941 [2024-11-20 16:53:32.858652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:40.941 passed 00:10:40.941 Test: blockdev nvme passthru rw ...passed 00:10:40.941 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:53:32.942806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:40.941 [2024-11-20 16:53:32.942821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:40.941 [2024-11-20 16:53:32.943242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:40.941 [2024-11-20 16:53:32.943254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:40.941 [2024-11-20 16:53:32.943537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:40.941 [2024-11-20 16:53:32.943548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:40.941 [2024-11-20 16:53:32.943914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:40.941 [2024-11-20 16:53:32.943926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:40.941 passed 00:10:40.941 Test: blockdev nvme admin passthru ...passed 00:10:40.941 Test: blockdev copy ...passed 00:10:40.941 00:10:40.941 Run Summary: Type Total Ran Passed Failed Inactive 00:10:40.941 suites 1 1 n/a 0 0 00:10:40.941 tests 23 23 23 0 0 00:10:40.941 asserts 152 152 152 0 n/a 00:10:40.941 00:10:40.941 Elapsed time = 1.040 seconds 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.201 rmmod nvme_tcp 00:10:41.201 rmmod nvme_fabrics 00:10:41.201 rmmod nvme_keyring 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
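The nvmfcleanup sequence traced above is the harness's module-unload idiom: errexit is suspended so a still-busy module cannot abort teardown, nvme-tcp is removed inside a bounded retry loop (the rmmod lines show it dragging nvme_fabrics and nvme_keyring out as dependencies), nvme-fabrics is then removed best-effort, and errexit is restored before returning. A minimal standalone sketch of that pattern; the sleep between attempts is an assumption, as the trace only shows the {1..20} loop itself:

    set +e                                # a busy module must not kill the whole teardown
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break  # -r also unloads dependent nvme_fabrics/nvme_keyring
        sleep 0.5                         # assumed back-off between attempts; not visible in the trace
    done
    modprobe -v -r nvme-fabrics           # best-effort: may already be gone
    set -e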
00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1830461 ']' 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1830461 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1830461 ']' 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1830461 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1830461 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1830461' 00:10:41.201 killing process with pid 1830461 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1830461 00:10:41.201 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1830461 00:10:41.462 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.462 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.462 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.462 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:41.462 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:41.462 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.462 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.462 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.462 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.462 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.462 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.462 16:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.373 16:53:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.373 00:10:43.373 real 0m12.341s 00:10:43.373 user 0m13.845s 00:10:43.373 sys 0m6.274s 00:10:43.373 16:53:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.373 16:53:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.373 ************************************ 00:10:43.373 END TEST nvmf_bdevio 00:10:43.373 ************************************ 00:10:43.634 16:53:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:43.634 00:10:43.634 real 5m5.866s 00:10:43.634 user 11m58.104s 00:10:43.634 sys 1m52.341s 
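The firewall cleanup in the nvmftestfini trace above works because of how the rule was installed during target setup: the ipts wrapper appended an iptables comment beginning with SPDK_NVMF, so teardown can drop every SPDK-owned rule in one save/filter/restore pass without tracking rules individually. A sketch of that tagging idiom, reconstructed from the two call sites visible in this log:

    # setup: tag the ACCEPT rule with a recognizable comment
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # teardown: reload the ruleset minus every tagged rule
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The same setup/teardown symmetry applies to the network namespace: cvl_0_0_ns_spdk, created by nvmftestinit earlier in this trace, is removed here via _remove_spdk_ns, and the leftover interface addresses are cleared with ip -4 addr flush.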
00:10:43.634 16:53:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.634 16:53:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:43.634 ************************************ 00:10:43.634 END TEST nvmf_target_core 00:10:43.634 ************************************ 00:10:43.634 16:53:35 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:43.634 16:53:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.634 16:53:35 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.634 16:53:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:43.634 ************************************ 00:10:43.634 START TEST nvmf_target_extra 00:10:43.634 ************************************ 00:10:43.634 16:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:43.634 * Looking for test storage... 00:10:43.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:43.634 16:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:43.634 16:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:43.634 16:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:43.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.896 --rc genhtml_branch_coverage=1 00:10:43.896 --rc genhtml_function_coverage=1 00:10:43.896 --rc genhtml_legend=1 00:10:43.896 --rc geninfo_all_blocks=1 00:10:43.896 --rc geninfo_unexecuted_blocks=1 00:10:43.896 00:10:43.896 ' 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:43.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.896 --rc genhtml_branch_coverage=1 00:10:43.896 --rc genhtml_function_coverage=1 00:10:43.896 --rc genhtml_legend=1 00:10:43.896 --rc geninfo_all_blocks=1 00:10:43.896 --rc geninfo_unexecuted_blocks=1 00:10:43.896 00:10:43.896 ' 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:43.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.896 --rc genhtml_branch_coverage=1 00:10:43.896 --rc genhtml_function_coverage=1 00:10:43.896 --rc genhtml_legend=1 00:10:43.896 --rc geninfo_all_blocks=1 00:10:43.896 --rc geninfo_unexecuted_blocks=1 00:10:43.896 00:10:43.896 ' 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:43.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.896 --rc genhtml_branch_coverage=1 00:10:43.896 --rc genhtml_function_coverage=1 00:10:43.896 --rc genhtml_legend=1 00:10:43.896 --rc geninfo_all_blocks=1 00:10:43.896 --rc geninfo_unexecuted_blocks=1 00:10:43.896 00:10:43.896 ' 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.896 16:53:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:43.897 ************************************ 00:10:43.897 START TEST nvmf_example 00:10:43.897 ************************************ 00:10:43.897 16:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:43.897 * Looking for test storage... 
00:10:43.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.897 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:43.897 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:43.897 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:44.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.159 --rc genhtml_branch_coverage=1 00:10:44.159 --rc genhtml_function_coverage=1 00:10:44.159 --rc genhtml_legend=1 00:10:44.159 --rc geninfo_all_blocks=1 00:10:44.159 --rc geninfo_unexecuted_blocks=1 00:10:44.159 00:10:44.159 ' 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:44.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.159 --rc genhtml_branch_coverage=1 00:10:44.159 --rc genhtml_function_coverage=1 00:10:44.159 --rc genhtml_legend=1 00:10:44.159 --rc geninfo_all_blocks=1 00:10:44.159 --rc geninfo_unexecuted_blocks=1 00:10:44.159 00:10:44.159 ' 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:44.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.159 --rc genhtml_branch_coverage=1 00:10:44.159 --rc genhtml_function_coverage=1 00:10:44.159 --rc genhtml_legend=1 00:10:44.159 --rc geninfo_all_blocks=1 00:10:44.159 --rc geninfo_unexecuted_blocks=1 00:10:44.159 00:10:44.159 ' 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:44.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.159 --rc genhtml_branch_coverage=1 00:10:44.159 --rc genhtml_function_coverage=1 00:10:44.159 --rc genhtml_legend=1 00:10:44.159 --rc geninfo_all_blocks=1 00:10:44.159 --rc geninfo_unexecuted_blocks=1 00:10:44.159 00:10:44.159 ' 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:44.159 16:53:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.159 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:44.160 16:53:36 
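One real diagnostic shows up above: nvmf/common.sh line 33 runs [ '' -eq 1 ] because the flag it tests is empty, and [ rejects an empty string as an integer, prints "integer expression expected", and returns non-zero, so the branch is simply skipped. A sketch of the defensive spellings that keep such tests quiet (SPDK_TEST_FOO is a made-up flag name used only for illustration):

    #!/usr/bin/env bash
    # '[' '' -eq 1 ']' fails noisily but harmlessly, as in the trace above.
    SPDK_TEST_FOO=""

    # Guard the numeric test behind a non-empty check...
    if [ -n "$SPDK_TEST_FOO" ] && [ "$SPDK_TEST_FOO" -eq 1 ]; then
        echo "flag enabled"
    fi

    # ...or substitute a default so the operand is always an integer.
    if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi
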
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.160 16:53:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:52.292 16:53:43 
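nvmftestinit above registers its teardown before doing any setup: trap nvmftestfini SIGINT SIGTERM EXIT at nvmf/common.sh@474 routes interrupts and normal exit through one cleanup path. A minimal sketch of the pattern with a stand-in body (the real nvmftestfini stops the target app, strips the tagged iptables rules, and deletes the namespace):

    #!/usr/bin/env bash
    # Install the teardown handler before any setup, mirroring the trace,
    # so a failed or interrupted run still cleans up. Body is a stand-in.
    nvmftestfini() {
        echo "tearing down target, iptables rules, netns..." >&2
    }

    trap nvmftestfini SIGINT SIGTERM EXIT

    echo "setup and test I/O run here"
    # Reaching the end of the script (or dying on SIGINT/SIGTERM) now
    # invokes nvmftestfini via the traps registered above.
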
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:52.292 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:52.293 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:52.293 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:52.293 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:52.293 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.293 16:53:43 
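The two "Found net devices under ..." lines come from a pure-sysfs lookup: for each PCI function matched earlier by vendor:device ID (0x8086:0x159b for these E810 ports), the kernel lists its netdev names under /sys/bus/pci/devices/<bdf>/net/. A sketch using the first BDF from this run:

    #!/usr/bin/env bash
    # Sketch of the lookup traced at nvmf/common.sh@411-@428; substitute
    # your own PCI address for the one found in this run.
    pci=0000:4b:00.0

    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    if [[ -e ${pci_net_devs[0]} ]]; then
        pci_net_devs=("${pci_net_devs[@]##*/}")        # strip to basenames
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    else
        echo "no net devices under $pci" >&2
    fi
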
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:52.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:52.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:10:52.293 00:10:52.293 --- 10.0.0.2 ping statistics --- 00:10:52.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.293 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:52.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:52.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:10:52.293 00:10:52.293 --- 10.0.0.1 ping statistics --- 00:10:52.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.293 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1835230 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1835230 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1835230 ']' 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.293 16:53:43 
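nvmf_tcp_init, traced just above, builds the test topology: the target-side port is moved into its own network namespace, both ends get 10.0.0.0/24 addresses, a comment-tagged iptables rule admits the NVMe/TCP port, and both directions are ping-verified. A condensed sketch (run as root; interface names match this run, and the harness's address flushes are omitted):

    #!/usr/bin/env bash
    # Condensed nvmf_tcp_init per the trace: target port cvl_0_0 goes into
    # namespace cvl_0_0_ns_spdk, initiator port cvl_0_1 stays in the root
    # namespace, and a tagged rule opens TCP/4420.
    set -e
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # The SPDK_NVMF comment lets teardown strip the rule later with
    # iptables-save | grep -v SPDK_NVMF | iptables-restore.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: allow nvmf tcp'

    ping -c 1 10.0.0.2                      # root ns -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # namespaced target -> root ns
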
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.293 16:53:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:52.554 16:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:04.781 Initializing NVMe Controllers 00:11:04.781 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:04.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:04.781 Initialization complete. Launching workers. 00:11:04.781 ======================================================== 00:11:04.781 Latency(us) 00:11:04.781 Device Information : IOPS MiB/s Average min max 00:11:04.781 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18180.45 71.02 3520.10 619.07 17347.09 00:11:04.781 ======================================================== 00:11:04.781 Total : 18180.45 71.02 3520.10 619.07 17347.09 00:11:04.781 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:04.781 rmmod nvme_tcp 00:11:04.781 rmmod nvme_fabrics 00:11:04.781 rmmod nvme_keyring 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1835230 ']' 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1835230 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1835230 ']' 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1835230 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1835230 00:11:04.781 16:53:55 
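The perf table above is the product of five RPCs plus one workload run. Spelled out as plain rpc.py calls, on the assumption that rpc_cmd in the trace is simply the harness wrapper over the same /var/tmp/spdk.sock socket; all flags are copied verbatim from this run, and the sleep is a crude stand-in for the harness's waitforlisten:

    #!/usr/bin/env bash
    # End-to-end recap of nvmf_example.sh as traced above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py"

    # Launch the example target inside the namespace (nvmf_example.sh@33).
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
    sleep 2

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512          # 64 MiB malloc bdev -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # QD 64, 4 KiB IOs, random mixed R/W with 30% reads, 10 s, NVMe/TCP;
    # this run settled at ~18.2K IOPS (71 MiB/s) with 3.52 ms mean latency.
    "$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
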
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1835230' 00:11:04.781 killing process with pid 1835230 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1835230 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1835230 00:11:04.781 nvmf threads initialize successfully 00:11:04.781 bdev subsystem init successfully 00:11:04.781 created a nvmf target service 00:11:04.781 create targets's poll groups done 00:11:04.781 all subsystems of target started 00:11:04.781 nvmf target is running 00:11:04.781 all subsystems of target stopped 00:11:04.781 destroy targets's poll groups done 00:11:04.781 destroyed the nvmf target service 00:11:04.781 bdev subsystem finish successfully 00:11:04.781 nvmf threads destroy successfully 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.781 16:53:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.352 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:05.352 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:05.352 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:05.352 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:05.352 00:11:05.352 real 0m21.494s 00:11:05.352 user 0m47.053s 00:11:05.352 sys 0m7.033s 00:11:05.352 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.352 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:05.352 ************************************ 00:11:05.352 END TEST nvmf_example 00:11:05.352 ************************************ 00:11:05.352 16:53:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:05.352 16:53:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:05.352 16:53:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.352 16:53:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:05.352 ************************************ 00:11:05.352 START TEST nvmf_filesystem 00:11:05.352 ************************************ 00:11:05.352 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:05.614 * Looking for test storage... 00:11:05.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:05.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.614 --rc genhtml_branch_coverage=1 00:11:05.614 --rc genhtml_function_coverage=1 00:11:05.614 --rc genhtml_legend=1 00:11:05.614 --rc geninfo_all_blocks=1 00:11:05.614 --rc geninfo_unexecuted_blocks=1 00:11:05.614 00:11:05.614 ' 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:05.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.614 --rc genhtml_branch_coverage=1 00:11:05.614 --rc genhtml_function_coverage=1 00:11:05.614 --rc genhtml_legend=1 00:11:05.614 --rc geninfo_all_blocks=1 00:11:05.614 --rc geninfo_unexecuted_blocks=1 00:11:05.614 00:11:05.614 ' 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:05.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.614 --rc genhtml_branch_coverage=1 00:11:05.614 --rc genhtml_function_coverage=1 00:11:05.614 --rc genhtml_legend=1 00:11:05.614 --rc geninfo_all_blocks=1 00:11:05.614 --rc geninfo_unexecuted_blocks=1 00:11:05.614 00:11:05.614 ' 00:11:05.614 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:05.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.614 --rc genhtml_branch_coverage=1 00:11:05.614 --rc genhtml_function_coverage=1 00:11:05.614 --rc genhtml_legend=1 00:11:05.615 --rc geninfo_all_blocks=1 00:11:05.615 --rc geninfo_unexecuted_blocks=1 00:11:05.615 00:11:05.615 ' 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:05.615 16:53:57 
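The framed START TEST / END TEST banners above come from run_test in autotest_common.sh, which also explains the '[' 3 -le 1 ']' argument check in the trace. A simplified reconstruction of that wrapper (banner width and timing details approximate; the real one also records the test name and toggles xtrace):

    #!/usr/bin/env bash
    # Simplified run_test: refuse fewer than two args, frame the test with
    # START/END banners, time the body, and pass through its exit status.
    run_test() {
        if [ $# -le 1 ]; then
            echo "usage: run_test <name> <command> [args...]" >&2
            return 1
        fi
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test demo_true true
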
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:05.615 
16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:05.615 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:05.616 #define SPDK_CONFIG_H 00:11:05.616 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:05.616 #define SPDK_CONFIG_APPS 1 00:11:05.616 #define SPDK_CONFIG_ARCH native 00:11:05.616 #undef SPDK_CONFIG_ASAN 00:11:05.616 #undef SPDK_CONFIG_AVAHI 00:11:05.616 #undef SPDK_CONFIG_CET 00:11:05.616 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:05.616 #define SPDK_CONFIG_COVERAGE 1 00:11:05.616 #define SPDK_CONFIG_CROSS_PREFIX 00:11:05.616 #undef SPDK_CONFIG_CRYPTO 00:11:05.616 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:05.616 #undef SPDK_CONFIG_CUSTOMOCF 00:11:05.616 #undef SPDK_CONFIG_DAOS 00:11:05.616 #define SPDK_CONFIG_DAOS_DIR 00:11:05.616 #define SPDK_CONFIG_DEBUG 1 00:11:05.616 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:05.616 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:05.616 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:05.616 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:05.616 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:05.616 #undef SPDK_CONFIG_DPDK_UADK 00:11:05.616 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:05.616 #define SPDK_CONFIG_EXAMPLES 1 00:11:05.616 #undef SPDK_CONFIG_FC 00:11:05.616 #define SPDK_CONFIG_FC_PATH 00:11:05.616 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:05.616 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:05.616 #define SPDK_CONFIG_FSDEV 1 00:11:05.616 #undef SPDK_CONFIG_FUSE 00:11:05.616 #undef SPDK_CONFIG_FUZZER 00:11:05.616 #define SPDK_CONFIG_FUZZER_LIB 00:11:05.616 #undef SPDK_CONFIG_GOLANG 00:11:05.616 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:05.616 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:05.616 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:05.616 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:05.616 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:05.616 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:05.616 #undef SPDK_CONFIG_HAVE_LZ4 00:11:05.616 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:05.616 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:05.616 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:05.616 #define SPDK_CONFIG_IDXD 1 00:11:05.616 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:05.616 #undef SPDK_CONFIG_IPSEC_MB 00:11:05.616 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:05.616 #define SPDK_CONFIG_ISAL 1 00:11:05.616 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:05.616 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:05.616 #define SPDK_CONFIG_LIBDIR 00:11:05.616 #undef SPDK_CONFIG_LTO 00:11:05.616 #define SPDK_CONFIG_MAX_LCORES 128 00:11:05.616 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:05.616 #define SPDK_CONFIG_NVME_CUSE 1 00:11:05.616 #undef SPDK_CONFIG_OCF 00:11:05.616 #define SPDK_CONFIG_OCF_PATH 00:11:05.616 #define SPDK_CONFIG_OPENSSL_PATH 00:11:05.616 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:05.616 #define SPDK_CONFIG_PGO_DIR 00:11:05.616 #undef SPDK_CONFIG_PGO_USE 00:11:05.616 #define SPDK_CONFIG_PREFIX /usr/local 00:11:05.616 #undef SPDK_CONFIG_RAID5F 00:11:05.616 #undef SPDK_CONFIG_RBD 00:11:05.616 #define SPDK_CONFIG_RDMA 1 00:11:05.616 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:05.616 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:05.616 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:05.616 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:05.616 #define SPDK_CONFIG_SHARED 1 00:11:05.616 #undef SPDK_CONFIG_SMA 00:11:05.616 #define SPDK_CONFIG_TESTS 1 00:11:05.616 #undef SPDK_CONFIG_TSAN 
00:11:05.616 #define SPDK_CONFIG_UBLK 1 00:11:05.616 #define SPDK_CONFIG_UBSAN 1 00:11:05.616 #undef SPDK_CONFIG_UNIT_TESTS 00:11:05.616 #undef SPDK_CONFIG_URING 00:11:05.616 #define SPDK_CONFIG_URING_PATH 00:11:05.616 #undef SPDK_CONFIG_URING_ZNS 00:11:05.616 #undef SPDK_CONFIG_USDT 00:11:05.616 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:05.616 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:05.616 #define SPDK_CONFIG_VFIO_USER 1 00:11:05.616 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:05.616 #define SPDK_CONFIG_VHOST 1 00:11:05.616 #define SPDK_CONFIG_VIRTIO 1 00:11:05.616 #undef SPDK_CONFIG_VTUNE 00:11:05.616 #define SPDK_CONFIG_VTUNE_DIR 00:11:05.616 #define SPDK_CONFIG_WERROR 1 00:11:05.616 #define SPDK_CONFIG_WPDK_DIR 00:11:05.616 #undef SPDK_CONFIG_XNVME 00:11:05.616 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:05.616 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:05.880 16:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:05.880 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
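[editor's note] The long run of `-- # : 0` / `-- # export SPDK_TEST_*` pairs in the trace above and below is autotest_common.sh assigning defaults to unset test flags before exporting them for child scripts. A minimal sketch of that idiom, with a couple of representative flags (the real script covers dozens; the `:` builtin is a no-op, so each line only triggers the assign-if-unset expansion, which is exactly what produces the bare `: 0` / `: 1` xtrace lines):

    #!/usr/bin/env bash
    # Assign-if-unset: ${VAR:=default} sets VAR only when it is unset or
    # empty, then the export makes the flag visible to spawned test scripts.
    : "${SPDK_RUN_FUNCTIONAL_TEST:=0}"   # stays 1 here, set by autorun-spdk.conf
    export SPDK_RUN_FUNCTIONAL_TEST
    : "${SPDK_TEST_NVMF:=0}"
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}" # non-boolean flags default the same way
    export SPDK_TEST_NVMF_TRANSPORT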
00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:05.881 16:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:05.881 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
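[editor's note] Note how PATH, LD_LIBRARY_PATH, and PYTHONPATH above carry the same entries five times over: each nested `source` of the export scripts prepends its directories again without checking for duplicates. That is harmless for lookup (first hit wins), but if a cleanup were wanted, a small dedup pass over a colon-separated variable could look like the following sketch; `dedup_path` is a hypothetical helper, not part of the SPDK scripts:

    #!/usr/bin/env bash
    # Deduplicate a colon-separated path list, keeping first-occurrence order.
    dedup_path() {
        local out='' seg
        local IFS=':'
        set -f                          # suppress globbing while word-splitting
        for seg in $1; do
            [[ -z $seg ]] && continue   # drop empty segments (leading ':')
            case ":$out:" in
                *":$seg:"*) ;;          # already present, skip
                *) out=${out:+$out:}$seg ;;
            esac
        done
        set +f
        printf '%s\n' "$out"
    }

    PATH=$(dedup_path "$PATH")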
00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
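[editor's note] The `rm -rf /var/tmp/asan_suppression_file` / `cat` / `echo leak:libfuse3.so` / `LSAN_OPTIONS=suppressions=...` sequence in the trace above is autotest_common.sh rebuilding a LeakSanitizer suppression file on the fly. A standalone sketch of that pattern (the suppression path is taken from the trace; each `leak:<pattern>` line silences reports whose stack matches the pattern):

    #!/usr/bin/env bash
    # Build an LSan suppression file and point LSAN_OPTIONS at it.
    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo 'leak:libfuse3.so' >> "$supp"   # known benign leak in libfuse, per the trace
    export LSAN_OPTIONS="suppressions=$supp"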
00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1838072 ]] 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1838072 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
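[editor's note] `kill -0 1838072` above is the classic liveness probe: signal 0 delivers nothing and only checks that the target pid exists. The `set_test_storage 2147483648` call it gates is traced next; the function walks `df -T` output into mounts/fss/sizes/avails arrays and settles on a candidate directory with enough free space. A simplified, hedged sketch of that probe (the real function also special-cases tmpfs/ramfs and overlay mounts, as the trace below shows; `pick_test_storage` and the candidate directories are hypothetical):

    #!/usr/bin/env bash
    # Return the first candidate directory whose filesystem has enough free space.
    pick_test_storage() {
        local requested=$1; shift
        local dir avail
        for dir in "$@"; do
            mkdir -p "$dir" || continue
            # df -Pk: POSIX format, 1K blocks; line 2 column 4 is "Available".
            avail=$(( $(df -Pk "$dir" | awk 'NR==2 {print $4}') * 1024 ))
            if (( avail >= requested )); then
                printf '%s\n' "$dir"
                return 0
            fi
        done
        return 1
    }

    # Usage: request 2 GiB, the same figure as in the trace (2147483648 bytes).
    pick_test_storage $((2 * 1024 * 1024 * 1024)) /tmp/spdk-tests "$HOME/spdk-tests"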
00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:05.882 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.qdzugn 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.qdzugn/tests/target /tmp/spdk.qdzugn 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:05.883 16:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=118316453888 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356509184 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11040055296 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666886144 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678252544 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847934976 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871302656 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23367680 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:05.883 16:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677662720 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=593920 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935634944 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935647232 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:05.883 * Looking for test storage... 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=118316453888 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13254647808 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.883 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:05.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.884 --rc genhtml_branch_coverage=1 00:11:05.884 --rc genhtml_function_coverage=1 00:11:05.884 --rc genhtml_legend=1 00:11:05.884 --rc geninfo_all_blocks=1 00:11:05.884 --rc geninfo_unexecuted_blocks=1 00:11:05.884 00:11:05.884 ' 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:05.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.884 --rc genhtml_branch_coverage=1 00:11:05.884 --rc genhtml_function_coverage=1 00:11:05.884 --rc genhtml_legend=1 00:11:05.884 --rc geninfo_all_blocks=1 00:11:05.884 --rc geninfo_unexecuted_blocks=1 00:11:05.884 00:11:05.884 ' 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:05.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.884 --rc genhtml_branch_coverage=1 00:11:05.884 --rc genhtml_function_coverage=1 00:11:05.884 --rc genhtml_legend=1 00:11:05.884 --rc geninfo_all_blocks=1 00:11:05.884 --rc geninfo_unexecuted_blocks=1 00:11:05.884 00:11:05.884 ' 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:05.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.884 --rc genhtml_branch_coverage=1 00:11:05.884 --rc genhtml_function_coverage=1 00:11:05.884 --rc genhtml_legend=1 00:11:05.884 --rc geninfo_all_blocks=1 00:11:05.884 --rc geninfo_unexecuted_blocks=1 00:11:05.884 00:11:05.884 ' 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.884 16:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:05.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:05.884 16:53:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.884 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.885 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.885 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:05.885 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:05.885 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:05.885 16:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:14.023 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:14.023 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.023 16:54:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:14.023 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.023 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:14.024 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:14.024 16:54:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:14.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:14.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:11:14.024 00:11:14.024 --- 10.0.0.2 ping statistics --- 00:11:14.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.024 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:14.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:14.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:11:14.024 00:11:14.024 --- 10.0.0.1 ping statistics --- 00:11:14.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.024 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.024 ************************************ 00:11:14.024 START TEST nvmf_filesystem_no_in_capsule 00:11:14.024 ************************************ 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1842079 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1842079 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1842079 ']' 00:11:14.024 
16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.024 16:54:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.024 [2024-11-20 16:54:05.713410] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:11:14.024 [2024-11-20 16:54:05.713470] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.024 [2024-11-20 16:54:05.814070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.024 [2024-11-20 16:54:05.867119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.024 [2024-11-20 16:54:05.867175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.024 [2024-11-20 16:54:05.867185] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.024 [2024-11-20 16:54:05.867192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.024 [2024-11-20 16:54:05.867199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:14.024 [2024-11-20 16:54:05.869605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.024 [2024-11-20 16:54:05.869766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.024 [2024-11-20 16:54:05.869928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.024 [2024-11-20 16:54:05.869928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.595 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.595 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:14.595 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:14.595 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:14.595 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.595 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.595 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:14.595 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:14.595 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.595 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.595 [2024-11-20 16:54:06.591015] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.595 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.596 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:14.596 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.596 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.596 Malloc1 00:11:14.596 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.596 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:14.596 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.596 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.596 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.596 16:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.596 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.596 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.596 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.596 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.596 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.596 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.596 [2024-11-20 16:54:06.761410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.596 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.596 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:14.596 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:14.856 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:14.856 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:14.856 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:14.856 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:14.856 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.856 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.857 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.857 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:14.857 { 00:11:14.857 "name": "Malloc1", 00:11:14.857 "aliases": [ 00:11:14.857 "a4ed0462-eb82-49de-ae25-1d8e4b1c81dc" 00:11:14.857 ], 00:11:14.857 "product_name": "Malloc disk", 00:11:14.857 "block_size": 512, 00:11:14.857 "num_blocks": 1048576, 00:11:14.857 "uuid": "a4ed0462-eb82-49de-ae25-1d8e4b1c81dc", 00:11:14.857 "assigned_rate_limits": { 00:11:14.857 "rw_ios_per_sec": 0, 00:11:14.857 "rw_mbytes_per_sec": 0, 00:11:14.857 "r_mbytes_per_sec": 0, 00:11:14.857 "w_mbytes_per_sec": 0 00:11:14.857 }, 00:11:14.857 "claimed": true, 00:11:14.857 "claim_type": "exclusive_write", 00:11:14.857 "zoned": false, 00:11:14.857 "supported_io_types": { 00:11:14.857 "read": 
true, 00:11:14.857 "write": true, 00:11:14.857 "unmap": true, 00:11:14.857 "flush": true, 00:11:14.857 "reset": true, 00:11:14.857 "nvme_admin": false, 00:11:14.857 "nvme_io": false, 00:11:14.857 "nvme_io_md": false, 00:11:14.857 "write_zeroes": true, 00:11:14.857 "zcopy": true, 00:11:14.857 "get_zone_info": false, 00:11:14.857 "zone_management": false, 00:11:14.857 "zone_append": false, 00:11:14.857 "compare": false, 00:11:14.857 "compare_and_write": false, 00:11:14.857 "abort": true, 00:11:14.857 "seek_hole": false, 00:11:14.857 "seek_data": false, 00:11:14.857 "copy": true, 00:11:14.857 "nvme_iov_md": false 00:11:14.857 }, 00:11:14.857 "memory_domains": [ 00:11:14.857 { 00:11:14.857 "dma_device_id": "system", 00:11:14.857 "dma_device_type": 1 00:11:14.857 }, 00:11:14.857 { 00:11:14.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.857 "dma_device_type": 2 00:11:14.857 } 00:11:14.857 ], 00:11:14.857 "driver_specific": {} 00:11:14.857 } 00:11:14.857 ]' 00:11:14.857 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:14.857 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:14.857 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:14.857 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:14.857 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:14.857 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:14.857 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:14.857 16:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:16.768 16:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:16.768 16:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:16.768 16:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.768 16:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:16.768 16:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:18.680 16:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:18.680 16:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:18.680 16:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:18.680 16:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:18.681 16:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.681 16:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:18.681 16:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:18.681 16:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:18.681 16:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:18.681 16:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:18.681 16:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:18.681 16:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:18.681 16:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:18.681 16:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:18.681 16:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:18.681 16:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:18.681 16:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:18.941 16:54:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:18.941 16:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:20.325 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:20.325 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:20.325 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:20.325 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.325 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.325 ************************************ 00:11:20.325 START TEST filesystem_ext4 00:11:20.325 ************************************ 00:11:20.325 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
00:11:20.325 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:20.325 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:20.325 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:20.325 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:20.325 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:20.325 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:20.325 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:20.325 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:20.325 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:20.325 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:20.325 mke2fs 1.47.0 (5-Feb-2023) 00:11:20.325 Discarding device blocks: 0/522240 done 00:11:20.325 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:20.325 Filesystem UUID: 5a40902b-d38c-4836-983a-d7877e4a58f6 00:11:20.325 Superblock backups stored on blocks: 00:11:20.325 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:20.325 00:11:20.325 Allocating group tables: 0/64 done 00:11:20.325 Writing inode tables: 0/64 done 00:11:20.325 Creating journal (8192 blocks): done 00:11:20.325 Writing superblocks and filesystem accounting information: 0/64 done 00:11:20.325 00:11:20.325 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:20.325 16:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:26.903 
16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1842079 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:26.903 00:11:26.903 real 0m6.460s 00:11:26.903 user 0m0.029s 00:11:26.903 sys 0m0.081s 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:26.903 ************************************ 00:11:26.903 END TEST filesystem_ext4 00:11:26.903 ************************************ 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.903 ************************************ 00:11:26.903 START TEST filesystem_btrfs 00:11:26.903 ************************************ 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:26.903 16:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:26.903 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:26.903 btrfs-progs v6.8.1 00:11:26.903 See https://btrfs.readthedocs.io for more information. 00:11:26.903 00:11:26.903 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:26.903 NOTE: several default settings have changed in version 5.15, please make sure 00:11:26.904 this does not affect your deployments: 00:11:26.904 - DUP for metadata (-m dup) 00:11:26.904 - enabled no-holes (-O no-holes) 00:11:26.904 - enabled free-space-tree (-R free-space-tree) 00:11:26.904 00:11:26.904 Label: (null) 00:11:26.904 UUID: 3069e1d5-abef-4fd8-b78b-e005ada6e8b2 00:11:26.904 Node size: 16384 00:11:26.904 Sector size: 4096 (CPU page size: 4096) 00:11:26.904 Filesystem size: 510.00MiB 00:11:26.904 Block group profiles: 00:11:26.904 Data: single 8.00MiB 00:11:26.904 Metadata: DUP 32.00MiB 00:11:26.904 System: DUP 8.00MiB 00:11:26.904 SSD detected: yes 00:11:26.904 Zoned device: no 00:11:26.904 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:26.904 Checksum: crc32c 00:11:26.904 Number of devices: 1 00:11:26.904 Devices: 00:11:26.904 ID SIZE PATH 00:11:26.904 1 510.00MiB /dev/nvme0n1p1 00:11:26.904 00:11:26.904 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:26.904 16:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:27.164 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:27.164 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:27.164 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:27.164 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:27.164 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:27.164 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:27.424 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1842079 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:27.425 
16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:27.425 00:11:27.425 real 0m0.708s 00:11:27.425 user 0m0.029s 00:11:27.425 sys 0m0.116s 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:27.425 ************************************ 00:11:27.425 END TEST filesystem_btrfs 00:11:27.425 ************************************ 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.425 ************************************ 00:11:27.425 START TEST filesystem_xfs 00:11:27.425 ************************************ 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:27.425 16:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:27.425 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:27.425 = sectsz=512 attr=2, projid32bit=1 00:11:27.425 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:27.425 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:27.425 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:27.425 = sunit=0 swidth=0 blks 00:11:27.425 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:27.425 log =internal log bsize=4096 blocks=16384, version=2 00:11:27.425 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:27.425 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:28.367 Discarding blocks...Done. 00:11:28.367 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:28.367 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:30.910 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:30.910 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:30.910 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:30.910 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:30.910 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:30.910 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:30.910 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1842079 00:11:30.910 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:30.910 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:31.186 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:31.186 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:31.186 00:11:31.186 real 0m3.649s 00:11:31.186 user 0m0.026s 00:11:31.186 sys 0m0.080s 00:11:31.186 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.186 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:31.186 ************************************ 00:11:31.186 END TEST filesystem_xfs 00:11:31.186 ************************************ 00:11:31.186 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:31.500 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:31.808 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.808 16:54:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:31.808 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:31.808 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:31.808 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.102 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:32.102 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.102 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:32.102 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:32.102 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.102 16:54:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.102 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.102 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:32.102 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1842079 00:11:32.102 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1842079 ']' 00:11:32.102 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1842079 00:11:32.102 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:32.102 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.102 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1842079 00:11:32.102 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.102 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.102 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1842079' 00:11:32.102 killing process with pid 1842079 00:11:32.102 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1842079 00:11:32.102 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 1842079 00:11:32.102 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:32.102 00:11:32.102 real 0m18.622s 00:11:32.102 user 1m13.497s 00:11:32.102 sys 0m1.481s 00:11:32.102 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.102 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.397 ************************************ 00:11:32.397 END TEST nvmf_filesystem_no_in_capsule 00:11:32.397 ************************************ 00:11:32.397 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:32.397 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:32.397 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.397 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:32.397 ************************************ 00:11:32.397 START TEST nvmf_filesystem_in_capsule 00:11:32.397 ************************************ 00:11:32.397 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:32.398 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:32.398 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:32.398 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:32.398 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:32.398 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.398 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1846467 00:11:32.398 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1846467 00:11:32.398 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.398 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1846467 ']' 00:11:32.398 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.398 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.398 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:32.398 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.398 16:54:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.398 [2024-11-20 16:54:24.415133] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:11:32.398 [2024-11-20 16:54:24.415193] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.398 [2024-11-20 16:54:24.503023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.398 [2024-11-20 16:54:24.533068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.398 [2024-11-20 16:54:24.533095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.398 [2024-11-20 16:54:24.533101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.398 [2024-11-20 16:54:24.533106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.398 [2024-11-20 16:54:24.533110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.398 [2024-11-20 16:54:24.534232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.398 [2024-11-20 16:54:24.534373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.398 [2024-11-20 16:54:24.534524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.398 [2024-11-20 16:54:24.534526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.341 [2024-11-20 16:54:25.252462] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.341 16:54:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.341 Malloc1 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.341 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.342 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.342 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.342 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.342 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.342 [2024-11-20 16:54:25.405003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.342 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.342 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:33.342 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:33.342 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:33.342 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:33.342 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:33.342 16:54:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:33.342 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.342 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.342 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.342 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:33.342 { 00:11:33.342 "name": "Malloc1", 00:11:33.342 "aliases": [ 00:11:33.342 "0cb2ff1d-5309-4b96-9a00-d465541d8f77" 00:11:33.342 ], 00:11:33.342 "product_name": "Malloc disk", 00:11:33.342 "block_size": 512, 00:11:33.342 "num_blocks": 1048576, 00:11:33.342 "uuid": "0cb2ff1d-5309-4b96-9a00-d465541d8f77", 00:11:33.342 "assigned_rate_limits": { 00:11:33.342 "rw_ios_per_sec": 0, 00:11:33.342 "rw_mbytes_per_sec": 0, 00:11:33.342 "r_mbytes_per_sec": 0, 00:11:33.342 "w_mbytes_per_sec": 0 00:11:33.342 }, 00:11:33.342 "claimed": true, 00:11:33.342 "claim_type": "exclusive_write", 00:11:33.342 "zoned": false, 00:11:33.342 "supported_io_types": { 00:11:33.342 "read": true, 00:11:33.342 "write": true, 00:11:33.342 "unmap": true, 00:11:33.342 "flush": true, 00:11:33.342 "reset": true, 00:11:33.342 "nvme_admin": false, 00:11:33.342 "nvme_io": false, 00:11:33.342 "nvme_io_md": false, 00:11:33.342 "write_zeroes": true, 00:11:33.342 "zcopy": true, 00:11:33.342 "get_zone_info": false, 00:11:33.342 "zone_management": false, 00:11:33.342 "zone_append": false, 00:11:33.342 "compare": false, 00:11:33.342 "compare_and_write": false, 00:11:33.342 "abort": true, 00:11:33.342 "seek_hole": false, 00:11:33.342 "seek_data": false, 00:11:33.342 "copy": true, 00:11:33.342 "nvme_iov_md": false 00:11:33.342 }, 00:11:33.342 "memory_domains": [ 00:11:33.342 { 00:11:33.342 "dma_device_id": "system", 00:11:33.342 "dma_device_type": 1 00:11:33.342 }, 00:11:33.342 { 00:11:33.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.342 "dma_device_type": 2 00:11:33.342 } 00:11:33.342 ], 00:11:33.342 "driver_specific": {} 00:11:33.342 } 00:11:33.342 ]' 00:11:33.342 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:33.342 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:33.342 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:33.601 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:33.601 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:33.601 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:33.601 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:33.601 16:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:34.985 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:34.985 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:34.985 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.985 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:34.985 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:36.894 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:36.894 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:36.894 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:36.894 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:36.894 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.894 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:36.894 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:36.894 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:36.894 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:36.894 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:36.894 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:36.894 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:36.894 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:36.894 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:36.894 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:36.894 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:36.894 16:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:37.487 16:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:38.058 16:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:39.003 16:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:39.003 16:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:39.003 16:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:39.003 16:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.003 16:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.003 ************************************ 00:11:39.003 START TEST filesystem_in_capsule_ext4 00:11:39.003 ************************************ 00:11:39.003 16:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:39.003 16:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:39.003 16:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:39.003 16:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:39.003 16:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:39.003 16:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:39.003 16:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:39.003 16:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:39.003 16:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:39.003 16:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:39.003 16:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:39.003 mke2fs 1.47.0 (5-Feb-2023) 00:11:39.263 Discarding device blocks: 0/522240 done 00:11:39.263 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:39.263 Filesystem UUID: a91ac90f-4b75-479f-9158-49566d1a7e51 00:11:39.263 Superblock backups stored on blocks: 00:11:39.263 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:39.263 00:11:39.263 Allocating group tables: 0/64 done 00:11:39.263 Writing inode tables: 
0/64 done 00:11:39.263 Creating journal (8192 blocks): done 00:11:41.784 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:11:41.784 00:11:41.784 16:54:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:41.784 16:54:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:48.367 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:48.367 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:48.367 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:48.367 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:48.367 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:48.367 16:54:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:48.367 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1846467 00:11:48.367 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:48.367 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:48.367 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:48.367 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:48.367 00:11:48.367 real 0m8.911s 00:11:48.367 user 0m0.029s 00:11:48.367 sys 0m0.080s 00:11:48.367 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.367 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:48.368 ************************************ 00:11:48.368 END TEST filesystem_in_capsule_ext4 00:11:48.368 ************************************ 00:11:48.368 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:48.368 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:48.368 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.368 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.368 
************************************ 00:11:48.368 START TEST filesystem_in_capsule_btrfs 00:11:48.368 ************************************ 00:11:48.368 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:48.368 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:48.368 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:48.368 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:48.368 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:48.368 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:48.368 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:48.368 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:48.368 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:48.368 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:48.368 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:48.368 btrfs-progs v6.8.1 00:11:48.368 See https://btrfs.readthedocs.io for more information. 00:11:48.368 00:11:48.368 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:48.368 NOTE: several default settings have changed in version 5.15, please make sure 00:11:48.368 this does not affect your deployments: 00:11:48.368 - DUP for metadata (-m dup) 00:11:48.368 - enabled no-holes (-O no-holes) 00:11:48.368 - enabled free-space-tree (-R free-space-tree) 00:11:48.368 00:11:48.368 Label: (null) 00:11:48.368 UUID: cd641d1b-1e67-46d7-bb2c-767adefa8c09 00:11:48.368 Node size: 16384 00:11:48.368 Sector size: 4096 (CPU page size: 4096) 00:11:48.368 Filesystem size: 510.00MiB 00:11:48.368 Block group profiles: 00:11:48.368 Data: single 8.00MiB 00:11:48.368 Metadata: DUP 32.00MiB 00:11:48.368 System: DUP 8.00MiB 00:11:48.368 SSD detected: yes 00:11:48.368 Zoned device: no 00:11:48.368 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:48.368 Checksum: crc32c 00:11:48.368 Number of devices: 1 00:11:48.368 Devices: 00:11:48.368 ID SIZE PATH 00:11:48.368 1 510.00MiB /dev/nvme0n1p1 00:11:48.368 00:11:48.368 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:48.368 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:48.628 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:48.628 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:48.628 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:48.628 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1846467 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:48.890 00:11:48.890 real 0m0.721s 00:11:48.890 user 0m0.029s 00:11:48.890 sys 0m0.117s 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:48.890 ************************************ 00:11:48.890 END TEST filesystem_in_capsule_btrfs 00:11:48.890 ************************************ 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.890 ************************************ 00:11:48.890 START TEST filesystem_in_capsule_xfs 00:11:48.890 ************************************ 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:48.890 16:54:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:48.890 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:48.890 = sectsz=512 attr=2, projid32bit=1 00:11:48.890 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:48.890 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:48.890 data = bsize=4096 blocks=130560, imaxpct=25 00:11:48.890 = sunit=0 swidth=0 blks 00:11:48.890 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:48.890 log =internal log bsize=4096 blocks=16384, version=2 00:11:48.890 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:48.890 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:50.275 Discarding blocks...Done. 
00:11:50.275 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:50.275 16:54:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:52.188 16:54:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:52.188 16:54:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:52.188 16:54:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:52.188 16:54:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:52.188 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:52.188 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:52.188 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1846467 00:11:52.188 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:52.188 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:52.188 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:52.188 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:52.188 00:11:52.188 real 0m3.120s 00:11:52.188 user 0m0.023s 00:11:52.188 sys 0m0.082s 00:11:52.188 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.188 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:52.188 ************************************ 00:11:52.188 END TEST filesystem_in_capsule_xfs 00:11:52.188 ************************************ 00:11:52.188 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:52.450 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:52.710 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.971 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.971 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:52.971 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:52.971 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.971 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:52.971 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.971 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:52.971 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.971 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.971 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.971 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.971 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:52.971 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1846467 00:11:52.971 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1846467 ']' 00:11:52.971 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1846467 00:11:52.971 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:52.971 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.971 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1846467 00:11:52.971 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:52.971 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:52.971 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1846467' 00:11:52.971 killing process with pid 1846467 00:11:52.971 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1846467 00:11:52.971 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1846467 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:53.232 00:11:53.232 real 0m20.856s 00:11:53.232 user 1m22.585s 00:11:53.232 sys 0m1.403s 00:11:53.232 16:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.232 ************************************ 00:11:53.232 END TEST nvmf_filesystem_in_capsule 00:11:53.232 ************************************ 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:53.232 rmmod nvme_tcp 00:11:53.232 rmmod nvme_fabrics 00:11:53.232 rmmod nvme_keyring 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.232 16:54:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:55.773 00:11:55.773 real 0m49.873s 00:11:55.773 user 2m38.474s 00:11:55.773 sys 0m8.865s 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:55.773 
************************************ 00:11:55.773 END TEST nvmf_filesystem 00:11:55.773 ************************************ 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:55.773 ************************************ 00:11:55.773 START TEST nvmf_target_discovery 00:11:55.773 ************************************ 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:55.773 * Looking for test storage... 00:11:55.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:55.773 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:55.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.774 --rc genhtml_branch_coverage=1 00:11:55.774 --rc genhtml_function_coverage=1 00:11:55.774 --rc genhtml_legend=1 00:11:55.774 --rc geninfo_all_blocks=1 00:11:55.774 --rc geninfo_unexecuted_blocks=1 00:11:55.774 00:11:55.774 ' 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:55.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.774 --rc genhtml_branch_coverage=1 00:11:55.774 --rc genhtml_function_coverage=1 00:11:55.774 --rc genhtml_legend=1 00:11:55.774 --rc geninfo_all_blocks=1 00:11:55.774 --rc geninfo_unexecuted_blocks=1 00:11:55.774 00:11:55.774 ' 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:55.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.774 --rc genhtml_branch_coverage=1 00:11:55.774 --rc genhtml_function_coverage=1 00:11:55.774 --rc genhtml_legend=1 00:11:55.774 --rc geninfo_all_blocks=1 00:11:55.774 --rc geninfo_unexecuted_blocks=1 00:11:55.774 00:11:55.774 ' 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:55.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.774 --rc genhtml_branch_coverage=1 00:11:55.774 --rc genhtml_function_coverage=1 00:11:55.774 --rc genhtml_legend=1 00:11:55.774 --rc geninfo_all_blocks=1 00:11:55.774 --rc geninfo_unexecuted_blocks=1 00:11:55.774 00:11:55.774 ' 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.774 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:55.775 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:55.775 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:55.775 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:55.775 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:55.775 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:55.775 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.775 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:55.775 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:55.775 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:55.775 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.775 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.775 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.775 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:55.775 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:55.775 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:55.775 16:54:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.905 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:03.905 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:03.905 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:03.905 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:03.905 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:03.905 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:03.905 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:03.905 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:03.905 16:54:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:03.906 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:03.906 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:03.906 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
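The "Found net devices under 0000:4b:00.0" record above comes from expanding the sysfs glob pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*), which lists the kernel net device bound to each PCI function. A minimal standalone sketch of that mapping, using the PCI address from this run (substitute your own):

  # each entry under .../net/ is a kernel net device bound to this PCI function
  pci=0000:4b:00.0
  for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$netdir" ] || continue    # with nullglob off, the pattern stays literal when nothing is bound
    echo "PCI $pci -> ${netdir##*/}"
  done

On this machine the two E810 ports (vendor 0x8086, device 0x159b, driver ice) resolve to cvl_0_0 and cvl_0_1, as the next records show.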
00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:03.906 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:03.906 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:03.906 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:03.906 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:03.906 16:54:55 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:03.906 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:03.906 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:03.906 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:03.906 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:03.906 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:03.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:03.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:12:03.906 00:12:03.906 --- 10.0.0.2 ping statistics --- 00:12:03.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.906 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:12:03.906 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:03.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:03.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:12:03.906 00:12:03.906 --- 10.0.0.1 ping statistics --- 00:12:03.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.906 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:12:03.906 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.906 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:03.906 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:03.906 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.906 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:03.906 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:03.906 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.906 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:03.906 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:03.906 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:03.906 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:03.907 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:03.907 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.907 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1854928 00:12:03.907 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1854928 00:12:03.907 16:54:55 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:03.907 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1854928 ']' 00:12:03.907 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.907 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.907 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.907 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.907 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.907 [2024-11-20 16:54:55.337493] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:12:03.907 [2024-11-20 16:54:55.337568] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.907 [2024-11-20 16:54:55.439196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.907 [2024-11-20 16:54:55.494195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.907 [2024-11-20 16:54:55.494254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.907 [2024-11-20 16:54:55.494263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.907 [2024-11-20 16:54:55.494270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.907 [2024-11-20 16:54:55.494277] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
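Before launching the target, nvmf_tcp_init split the two ports across a network namespace so initiator and target traffic crosses the physical link instead of loopback. Condensed from the commands logged above (interface names and addresses are this run's; the address flushes and the iptables comment tag are omitted here for brevity):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port moves into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator port stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                            # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # netns -> initiator

The real iptables rule additionally carries -m comment --comment 'SPDK_NVMF:...', which is what lets teardown later strip harness rules with iptables-save | grep -v SPDK_NVMF | iptables-restore. nvmf_tgt is then started inside the namespace via ip netns exec, so its listener at 10.0.0.2:4420 is reachable only over cvl_0_1.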
00:12:03.907 [2024-11-20 16:54:55.496275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.907 [2024-11-20 16:54:55.496447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.907 [2024-11-20 16:54:55.496689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.907 [2024-11-20 16:54:55.496691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.166 [2024-11-20 16:54:56.203151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.166 Null1 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.166 16:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.166 [2024-11-20 16:54:56.274412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:04.166 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.167 Null2 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:04.167 Null3 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.167 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.429 Null4 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.429 16:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:04.429 00:12:04.429 Discovery Log Number of Records 6, Generation counter 6 00:12:04.429 =====Discovery Log Entry 0====== 00:12:04.429 trtype: tcp 00:12:04.429 adrfam: ipv4 00:12:04.429 subtype: current discovery subsystem 00:12:04.429 treq: not required 00:12:04.429 portid: 0 00:12:04.429 trsvcid: 4420 00:12:04.429 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:04.429 traddr: 10.0.0.2 00:12:04.429 eflags: explicit discovery connections, duplicate discovery information 00:12:04.429 sectype: none 00:12:04.429 =====Discovery Log Entry 1====== 00:12:04.429 trtype: tcp 00:12:04.429 adrfam: ipv4 00:12:04.429 subtype: nvme subsystem 00:12:04.429 treq: not required 00:12:04.429 portid: 0 00:12:04.429 trsvcid: 4420 00:12:04.429 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:04.429 traddr: 10.0.0.2 00:12:04.429 eflags: none 00:12:04.429 sectype: none 00:12:04.429 =====Discovery Log Entry 2====== 00:12:04.429 trtype: tcp 00:12:04.429 adrfam: ipv4 00:12:04.429 subtype: nvme subsystem 00:12:04.429 treq: not required 00:12:04.429 portid: 0 00:12:04.429 trsvcid: 4420 00:12:04.429 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:04.429 traddr: 10.0.0.2 00:12:04.429 eflags: none 00:12:04.429 sectype: none 00:12:04.429 =====Discovery Log Entry 3====== 00:12:04.429 trtype: tcp 00:12:04.429 adrfam: ipv4 00:12:04.429 subtype: nvme subsystem 00:12:04.429 treq: not required 00:12:04.429 portid: 0 00:12:04.429 trsvcid: 4420 00:12:04.429 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:04.429 traddr: 10.0.0.2 00:12:04.429 eflags: none 00:12:04.429 sectype: none 00:12:04.429 =====Discovery Log Entry 4====== 00:12:04.429 trtype: tcp 00:12:04.429 adrfam: ipv4 00:12:04.429 subtype: nvme subsystem 
00:12:04.429 treq: not required 00:12:04.429 portid: 0 00:12:04.429 trsvcid: 4420 00:12:04.429 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:04.429 traddr: 10.0.0.2 00:12:04.429 eflags: none 00:12:04.429 sectype: none 00:12:04.429 =====Discovery Log Entry 5====== 00:12:04.429 trtype: tcp 00:12:04.429 adrfam: ipv4 00:12:04.429 subtype: discovery subsystem referral 00:12:04.429 treq: not required 00:12:04.429 portid: 0 00:12:04.429 trsvcid: 4430 00:12:04.429 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:04.429 traddr: 10.0.0.2 00:12:04.429 eflags: none 00:12:04.429 sectype: none 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:04.429 Perform nvmf subsystem discovery via RPC 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.429 [ 00:12:04.429 { 00:12:04.429 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:04.429 "subtype": "Discovery", 00:12:04.429 "listen_addresses": [ 00:12:04.429 { 00:12:04.429 "trtype": "TCP", 00:12:04.429 "adrfam": "IPv4", 00:12:04.429 "traddr": "10.0.0.2", 00:12:04.429 "trsvcid": "4420" 00:12:04.429 } 00:12:04.429 ], 00:12:04.429 "allow_any_host": true, 00:12:04.429 "hosts": [] 00:12:04.429 }, 00:12:04.429 { 00:12:04.429 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:04.429 "subtype": "NVMe", 00:12:04.429 "listen_addresses": [ 00:12:04.429 { 00:12:04.429 "trtype": "TCP", 00:12:04.429 "adrfam": "IPv4", 00:12:04.429 "traddr": "10.0.0.2", 00:12:04.429 "trsvcid": "4420" 00:12:04.429 } 00:12:04.429 ], 00:12:04.429 "allow_any_host": true, 00:12:04.429 "hosts": [], 00:12:04.429 "serial_number": "SPDK00000000000001", 00:12:04.429 "model_number": "SPDK bdev Controller", 00:12:04.429 "max_namespaces": 32, 00:12:04.429 "min_cntlid": 1, 00:12:04.429 "max_cntlid": 65519, 00:12:04.429 "namespaces": [ 00:12:04.429 { 00:12:04.429 "nsid": 1, 00:12:04.429 "bdev_name": "Null1", 00:12:04.429 "name": "Null1", 00:12:04.429 "nguid": "8BF2F9134972400CA60D6A859093B335", 00:12:04.429 "uuid": "8bf2f913-4972-400c-a60d-6a859093b335" 00:12:04.429 } 00:12:04.429 ] 00:12:04.429 }, 00:12:04.429 { 00:12:04.429 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:04.429 "subtype": "NVMe", 00:12:04.429 "listen_addresses": [ 00:12:04.429 { 00:12:04.429 "trtype": "TCP", 00:12:04.429 "adrfam": "IPv4", 00:12:04.429 "traddr": "10.0.0.2", 00:12:04.429 "trsvcid": "4420" 00:12:04.429 } 00:12:04.429 ], 00:12:04.429 "allow_any_host": true, 00:12:04.429 "hosts": [], 00:12:04.429 "serial_number": "SPDK00000000000002", 00:12:04.429 "model_number": "SPDK bdev Controller", 00:12:04.429 "max_namespaces": 32, 00:12:04.429 "min_cntlid": 1, 00:12:04.429 "max_cntlid": 65519, 00:12:04.429 "namespaces": [ 00:12:04.429 { 00:12:04.429 "nsid": 1, 00:12:04.429 "bdev_name": "Null2", 00:12:04.429 "name": "Null2", 00:12:04.429 "nguid": "48D7140B60F24368B24FEF19F6FAE31D", 00:12:04.429 "uuid": "48d7140b-60f2-4368-b24f-ef19f6fae31d" 00:12:04.429 } 00:12:04.429 ] 00:12:04.429 }, 00:12:04.429 { 00:12:04.429 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:04.429 "subtype": "NVMe", 00:12:04.429 "listen_addresses": [ 00:12:04.429 { 00:12:04.429 "trtype": "TCP", 00:12:04.429 "adrfam": "IPv4", 00:12:04.429 "traddr": "10.0.0.2", 
00:12:04.429 "trsvcid": "4420" 00:12:04.429 } 00:12:04.429 ], 00:12:04.429 "allow_any_host": true, 00:12:04.429 "hosts": [], 00:12:04.429 "serial_number": "SPDK00000000000003", 00:12:04.429 "model_number": "SPDK bdev Controller", 00:12:04.429 "max_namespaces": 32, 00:12:04.429 "min_cntlid": 1, 00:12:04.429 "max_cntlid": 65519, 00:12:04.429 "namespaces": [ 00:12:04.429 { 00:12:04.429 "nsid": 1, 00:12:04.429 "bdev_name": "Null3", 00:12:04.429 "name": "Null3", 00:12:04.429 "nguid": "122E1C6DE20C4FCFB337FD2E6F6A7707", 00:12:04.429 "uuid": "122e1c6d-e20c-4fcf-b337-fd2e6f6a7707" 00:12:04.429 } 00:12:04.429 ] 00:12:04.429 }, 00:12:04.429 { 00:12:04.429 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:04.429 "subtype": "NVMe", 00:12:04.429 "listen_addresses": [ 00:12:04.429 { 00:12:04.429 "trtype": "TCP", 00:12:04.429 "adrfam": "IPv4", 00:12:04.429 "traddr": "10.0.0.2", 00:12:04.429 "trsvcid": "4420" 00:12:04.429 } 00:12:04.429 ], 00:12:04.429 "allow_any_host": true, 00:12:04.429 "hosts": [], 00:12:04.429 "serial_number": "SPDK00000000000004", 00:12:04.429 "model_number": "SPDK bdev Controller", 00:12:04.429 "max_namespaces": 32, 00:12:04.429 "min_cntlid": 1, 00:12:04.429 "max_cntlid": 65519, 00:12:04.429 "namespaces": [ 00:12:04.429 { 00:12:04.429 "nsid": 1, 00:12:04.429 "bdev_name": "Null4", 00:12:04.429 "name": "Null4", 00:12:04.429 "nguid": "D01E27E34879425FAF1940B968A7CEFB", 00:12:04.429 "uuid": "d01e27e3-4879-425f-af19-40b968a7cefb" 00:12:04.429 } 00:12:04.429 ] 00:12:04.429 } 00:12:04.429 ] 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:04.429 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.430 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.430 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.430 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.430 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:04.430 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.430 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.430 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.430 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:04.430 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:04.430 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.430 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.430 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.430 16:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:04.430 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.430 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.691 16:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:04.691 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:04.692 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:04.692 rmmod nvme_tcp 00:12:04.692 rmmod nvme_fabrics 00:12:04.692 rmmod nvme_keyring 00:12:04.692 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:04.692 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:04.692 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:04.692 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1854928 ']' 00:12:04.692 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1854928 00:12:04.692 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1854928 ']' 00:12:04.692 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1854928 00:12:04.692 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:04.692 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.692 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1854928 00:12:04.692 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.692 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.692 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1854928' 00:12:04.692 killing process with pid 1854928 00:12:04.692 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1854928 00:12:04.692 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1854928 00:12:04.953 16:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:04.953 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:04.953 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:04.953 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:04.953 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:04.953 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:04.953 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:04.953 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:04.953 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:04.953 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.953 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.953 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:07.497 00:12:07.497 real 0m11.625s 00:12:07.497 user 0m8.369s 00:12:07.497 sys 0m6.192s 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.497 ************************************ 00:12:07.497 END TEST nvmf_target_discovery 00:12:07.497 ************************************ 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:07.497 ************************************ 00:12:07.497 START TEST nvmf_referrals 00:12:07.497 ************************************ 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:07.497 * Looking for test storage... 
00:12:07.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:07.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.497 --rc genhtml_branch_coverage=1 00:12:07.497 --rc genhtml_function_coverage=1 00:12:07.497 --rc genhtml_legend=1 00:12:07.497 --rc geninfo_all_blocks=1 00:12:07.497 --rc geninfo_unexecuted_blocks=1 00:12:07.497 00:12:07.497 ' 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:07.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.497 --rc genhtml_branch_coverage=1 00:12:07.497 --rc genhtml_function_coverage=1 00:12:07.497 --rc genhtml_legend=1 00:12:07.497 --rc geninfo_all_blocks=1 00:12:07.497 --rc geninfo_unexecuted_blocks=1 00:12:07.497 00:12:07.497 ' 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:07.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.497 --rc genhtml_branch_coverage=1 00:12:07.497 --rc genhtml_function_coverage=1 00:12:07.497 --rc genhtml_legend=1 00:12:07.497 --rc geninfo_all_blocks=1 00:12:07.497 --rc geninfo_unexecuted_blocks=1 00:12:07.497 00:12:07.497 ' 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:07.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.497 --rc genhtml_branch_coverage=1 00:12:07.497 --rc genhtml_function_coverage=1 00:12:07.497 --rc genhtml_legend=1 00:12:07.497 --rc geninfo_all_blocks=1 00:12:07.497 --rc geninfo_unexecuted_blocks=1 00:12:07.497 00:12:07.497 ' 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.497 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:07.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
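The "[: : integer expression expected" complaint just above, also seen earlier during the discovery test, is a real (if harmless) quirk in the harness: nvmf/common.sh line 33 executes '[' '' -eq 1 ']' when the flag it tests is unset, so the arithmetic comparison gets an empty operand, prints the error, and the if-branch is simply not taken. A defensive pattern that avoids the noise (the flag's actual name is not visible in this log, so SOME_FLAG below is purely illustrative):

  # default the value before the numeric test; empty/unset is treated as 0
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
  fi

Using bash's [[ $SOME_FLAG -eq 1 ]] would also be quiet here, since [[ ]] arithmetic evaluation treats an empty operand as 0 rather than erroring.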
00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:07.498 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:15.638 16:55:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:15.638 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:15.638 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:15.638 
16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:15.638 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:15.638 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:15.638 16:55:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.638 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:15.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:15.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.716 ms 00:12:15.639 00:12:15.639 --- 10.0.0.2 ping statistics --- 00:12:15.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.639 rtt min/avg/max/mdev = 0.716/0.716/0.716/0.000 ms 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:12:15.639 00:12:15.639 --- 10.0.0.1 ping statistics --- 00:12:15.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.639 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1859416 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1859416 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1859416 ']' 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
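The block above is nvmf_tcp_init wiring the two E810 ports into a point-to-point test network before nvmf_tgt is launched inside the namespace (the waitforlisten just above): cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables ACCEPT rule is opened for the NVMe/TCP port, and one ping in each direction proves reachability. A condensed sketch of the same wiring, assuming this run's interface names (substitute your own):

  ip netns add cvl_0_0_ns_spdk                      # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port out of the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # namespace -> root ns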
00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.639 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.639 [2024-11-20 16:55:07.014358] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:12:15.639 [2024-11-20 16:55:07.014426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.639 [2024-11-20 16:55:07.114406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.639 [2024-11-20 16:55:07.167499] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.639 [2024-11-20 16:55:07.167550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.639 [2024-11-20 16:55:07.167559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.639 [2024-11-20 16:55:07.167572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.639 [2024-11-20 16:55:07.167578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.639 [2024-11-20 16:55:07.169680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.639 [2024-11-20 16:55:07.169839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.639 [2024-11-20 16:55:07.170001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.639 [2024-11-20 16:55:07.170002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.901 [2024-11-20 16:55:07.884254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:12:15.901 [2024-11-20 16:55:07.900576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.901 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.901 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:15.901 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:15.901 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:15.901 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:15.901 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:15.901 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.901 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:15.901 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:16.163 16:55:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:16.163 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:16.423 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:16.423 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:16.423 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:16.423 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.423 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.423 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.423 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:16.423 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.423 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.423 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.423 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:16.423 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:16.423 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:16.423 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:16.423 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.423 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.423 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:16.423 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:16.684 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:16.684 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:16.684 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:16.684 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:16.684 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:16.684 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.684 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:16.684 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:16.684 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:16.684 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:16.684 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:16.684 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:16.684 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.684 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:16.945 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:16.945 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:16.945 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:16.946 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:16.946 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.946 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.206 16:55:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:17.206 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:17.466 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:17.466 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:17.466 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:17.466 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:17.466 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:17.466 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.466 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:17.466 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:17.466 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:17.466 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:17.466 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:17.466 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.466 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:17.726 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:17.727 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:17.727 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.727 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.727 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.727 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:17.727 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:17.727 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.727 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.727 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.727 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:17.727 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:17.727 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:17.727 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:17.727 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.727 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:17.727 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:17.987 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:17.987 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:17.987 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:17.987 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:17.987 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:17.987 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:17.987 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
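At this point the referrals test proper has finished and teardown (nvmftestfini) is underway. The test registered three referrals (127.0.0.2/.3/.4, port 4430) against the discovery service on 10.0.0.2:8009, verified them twice — through the RPC interface and from the initiator side with nvme discover — removed them, and checked that the referral list drained back to zero; the same add/verify/remove cycle was then repeated for referrals carrying an explicit subsystem NQN (-n nqn.2016-06.io.spdk:cnode1) and the discovery NQN. A hedged sketch of one such cycle driven directly with scripts/rpc.py (rpc_cmd in the trace is a thin wrapper around it; the relative path assumes an SPDK checkout):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  $rpc nvmf_discovery_get_referrals | jq length       # expect 1
  # initiator view: every non-"current" record in the discovery log is a referral
  hostnqn=$(nvme gen-hostnqn)                         # common.sh derives the host NQN the same way
  nvme discover --hostnqn="$hostnqn" -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  $rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
  $rpc nvmf_discovery_get_referrals | jq length       # back to 0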
00:12:17.987 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:17.987 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:17.987 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:17.987 rmmod nvme_tcp 00:12:17.987 rmmod nvme_fabrics 00:12:17.987 rmmod nvme_keyring 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1859416 ']' 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1859416 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1859416 ']' 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1859416 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1859416 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1859416' 00:12:18.248 killing process with pid 1859416 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1859416 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1859416 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.248 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.248 16:55:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:20.894 00:12:20.894 real 0m13.268s 00:12:20.894 user 0m15.937s 00:12:20.894 sys 0m6.543s 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.894 ************************************ 00:12:20.894 END TEST nvmf_referrals 00:12:20.894 ************************************ 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:20.894 ************************************ 00:12:20.894 START TEST nvmf_connect_disconnect 00:12:20.894 ************************************ 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:20.894 * Looking for test storage... 00:12:20.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.894 16:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:20.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.894 --rc genhtml_branch_coverage=1 00:12:20.894 --rc genhtml_function_coverage=1 00:12:20.894 --rc genhtml_legend=1 00:12:20.894 --rc geninfo_all_blocks=1 00:12:20.894 --rc geninfo_unexecuted_blocks=1 00:12:20.894 00:12:20.894 ' 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:20.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.894 --rc genhtml_branch_coverage=1 00:12:20.894 --rc genhtml_function_coverage=1 00:12:20.894 --rc genhtml_legend=1 00:12:20.894 --rc geninfo_all_blocks=1 00:12:20.894 --rc geninfo_unexecuted_blocks=1 00:12:20.894 00:12:20.894 ' 00:12:20.894 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:20.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.895 --rc genhtml_branch_coverage=1 00:12:20.895 --rc genhtml_function_coverage=1 00:12:20.895 --rc genhtml_legend=1 00:12:20.895 --rc geninfo_all_blocks=1 00:12:20.895 --rc geninfo_unexecuted_blocks=1 00:12:20.895 00:12:20.895 ' 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:20.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.895 --rc genhtml_branch_coverage=1 00:12:20.895 --rc genhtml_function_coverage=1 00:12:20.895 --rc genhtml_legend=1 00:12:20.895 --rc geninfo_all_blocks=1 00:12:20.895 --rc geninfo_unexecuted_blocks=1 00:12:20.895 00:12:20.895 ' 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.895 16:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:20.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:20.895 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:29.034 
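The "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message above is a genuine script warning, not a test failure: line 33 evaluates [ '' -eq 1 ], and test's -eq requires integer operands, so the empty expansion makes the command error out and the script simply falls through to the false branch. A minimal reproduction plus two tolerant spellings (variable name hypothetical):

flag=''
[ "$flag" -eq 1 ] && echo enabled        # -> "[: : integer expression expected"

[ "${flag:-0}" -eq 1 ] && echo enabled   # default the empty value to 0
[[ $flag == 1 ]] && echo enabled         # or compare as a string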
16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:29.034 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:29.035 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:29.035 
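The discovery pass traced here keys arrays of supported parts by vendor:device pairs -- e810 holds the Intel 0x1592/0x159b IDs, x722 and mlx their own -- and checks every cached bus address against them, which is what produces the two "Found 0000:4b:00.x (0x8086 - 0x159b)" hits for this host's E810 ports (bound to the ice driver). A stripped-down sketch of the same idea reading sysfs directly; the real gather_supported_nvmf_pci_devs additionally caches the whole bus and covers the Mellanox IDs:

intel=0x8086
e810=(0x1592 0x159b)
for dev in /sys/bus/pci/devices/*; do
    # vendor/device files hold the hex IDs, e.g. 0x8086 / 0x159b
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    [[ $vendor == "$intel" ]] || continue
    for id in "${e810[@]}"; do
        [[ $device == "$id" ]] && echo "Found ${dev##*/} ($vendor - $device)"
    done
done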
16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:29.035 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:29.035 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
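Mapping a matched PCI function to its kernel interface, as the pci_net_devs steps above do, is a single glob plus a prefix strip -- that is how 0000:4b:00.0 resolves to cvl_0_0. Condensed:

pci=0000:4b:00.0                                  # address taken from the log
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # one entry per netdev
pci_net_devs=("${pci_net_devs[@]##*/}")           # keep just the names
echo "Found net devices under $pci: ${pci_net_devs[*]}"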
00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:29.035 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:29.035 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:29.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:29.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:12:29.035 00:12:29.035 --- 10.0.0.2 ping statistics --- 00:12:29.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.035 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:29.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:12:29.035 00:12:29.035 --- 10.0.0.1 ping statistics --- 00:12:29.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.035 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1864401 00:12:29.035 16:55:20 
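nvmf_tcp_init, traced over the preceding lines, builds the physical-loopback topology the whole run rides on: the first E810 port moves into a fresh namespace as the target NIC (10.0.0.2), the second stays in the root namespace as the initiator (10.0.0.1), a firewall hole is punched for the NVMe/TCP port, and both directions are ping-verified before any NVMe traffic flows. The same setup, condensed from the trace -- note the rule is tagged with an SPDK_NVMF comment so teardown can find it later:

ip netns add cvl_0_0_ns_spdk                    # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move the target port in
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                              # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1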
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1864401 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1864401 ']' 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.035 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.035 [2024-11-20 16:55:20.286824] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:12:29.035 [2024-11-20 16:55:20.286891] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.035 [2024-11-20 16:55:20.386680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.036 [2024-11-20 16:55:20.442065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.036 [2024-11-20 16:55:20.442119] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.036 [2024-11-20 16:55:20.442127] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.036 [2024-11-20 16:55:20.442136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.036 [2024-11-20 16:55:20.442142] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
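nvmfappstart launches nvmf_tgt under ip netns exec with core mask 0xF, then waitforlisten blocks until the RPC socket answers -- that is the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above. A minimal stand-in for that poll (paths relative to an SPDK checkout; the real helper lives in autotest_common.sh and retries up to max_retries=100):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in $(seq 1 100); do
    # any successful RPC proves the app is up and listening
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
    sleep 0.5
done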
00:12:29.036 [2024-11-20 16:55:20.444214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.036 [2024-11-20 16:55:20.444333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.036 [2024-11-20 16:55:20.444546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.036 [2024-11-20 16:55:20.444548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.036 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.036 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:29.036 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:29.036 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:29.036 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.036 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.036 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:29.036 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.036 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.036 [2024-11-20 16:55:21.169503] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.036 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.036 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:29.036 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.036 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.296 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.296 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:29.296 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:29.296 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.296 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.296 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.296 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:29.296 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.296 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.296 16:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.296 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.296 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.296 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.296 [2024-11-20 16:55:21.247549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.296 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.296 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:29.296 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:29.296 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:33.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.596 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:47.596 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:47.596 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:47.596 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:47.596 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:47.596 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:47.596 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:47.596 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:47.596 rmmod nvme_tcp 00:12:47.596 rmmod nvme_fabrics 00:12:47.596 rmmod nvme_keyring 00:12:47.597 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:47.597 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:47.597 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:47.597 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1864401 ']' 00:12:47.597 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1864401 00:12:47.597 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1864401 ']' 00:12:47.597 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1864401 00:12:47.597 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
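With the target listening, connect_disconnect.sh provisions a single subsystem over RPC and runs its num_iterations=5 loop behind set +x, which is why only nvme-cli's "disconnected 1 controller(s)" notices surface in the trace. The same sequence as explicit rpc.py calls, plus the core of each iteration (rpc_cmd is functionally a wrapper over rpc.py on /var/tmp/spdk.sock; the real loop also waits for the namespace to show up on the initiator before disconnecting):

rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc.py bdev_malloc_create 64 512                    # -> Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

for _ in $(seq 1 5); do
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the notice above
done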
00:12:47.597 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.597 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1864401 00:12:47.597 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:47.597 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:47.597 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1864401' 00:12:47.597 killing process with pid 1864401 00:12:47.597 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1864401 00:12:47.597 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1864401 00:12:47.857 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:47.857 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:47.857 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:47.857 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:47.857 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:47.857 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:47.857 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:47.857 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:47.857 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:47.857 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.857 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.857 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.767 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:49.767 00:12:49.767 real 0m29.349s 00:12:49.767 user 1m19.249s 00:12:49.767 sys 0m7.111s 00:12:49.767 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.767 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:49.767 ************************************ 00:12:49.767 END TEST nvmf_connect_disconnect 00:12:49.767 ************************************ 00:12:49.767 16:55:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:49.767 16:55:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:49.767 16:55:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.767 16:55:41 
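nvmftestfini then unwinds everything in reverse -- kernel initiator modules out (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), target killed by pid, only the SPDK-tagged firewall rules stripped, namespace and addresses dropped -- before the run is summed up in the real/user/sys times and END TEST banner above. A mirror of that path; _remove_spdk_ns runs with its trace redirected away, so the netns delete below is an assumption:

modprobe -v -r nvme-tcp                                # drags fabrics/keyring out too
kill "$nvmfpid" && wait "$nvmfpid"
iptables-save | grep -v SPDK_NVMF | iptables-restore   # the comment tag pays off here
ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1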
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:50.028 ************************************ 00:12:50.028 START TEST nvmf_multitarget 00:12:50.028 ************************************ 00:12:50.028 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:50.028 * Looking for test storage... 00:12:50.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:50.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.028 --rc genhtml_branch_coverage=1 00:12:50.028 --rc genhtml_function_coverage=1 00:12:50.028 --rc genhtml_legend=1 00:12:50.028 --rc geninfo_all_blocks=1 00:12:50.028 --rc geninfo_unexecuted_blocks=1 00:12:50.028 00:12:50.028 ' 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:50.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.028 --rc genhtml_branch_coverage=1 00:12:50.028 --rc genhtml_function_coverage=1 00:12:50.028 --rc genhtml_legend=1 00:12:50.028 --rc geninfo_all_blocks=1 00:12:50.028 --rc geninfo_unexecuted_blocks=1 00:12:50.028 00:12:50.028 ' 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:50.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.028 --rc genhtml_branch_coverage=1 00:12:50.028 --rc genhtml_function_coverage=1 00:12:50.028 --rc genhtml_legend=1 00:12:50.028 --rc geninfo_all_blocks=1 00:12:50.028 --rc geninfo_unexecuted_blocks=1 00:12:50.028 00:12:50.028 ' 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:50.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.028 --rc genhtml_branch_coverage=1 00:12:50.028 --rc genhtml_function_coverage=1 00:12:50.028 --rc genhtml_legend=1 00:12:50.028 --rc geninfo_all_blocks=1 00:12:50.028 --rc geninfo_unexecuted_blocks=1 00:12:50.028 00:12:50.028 ' 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.028 16:55:42 
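The scripts/common.sh steps traced above implement a component-wise version compare: split both version strings on '.', '-' and ':', then walk the fields numerically, which is how lt 1.15 2 correctly ranks lcov 1.15 below 2 where a plain string compare would not. A compact re-implementation of the same logic (function name mine):

version_lt() {
    local -a a b; local i
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0   # strictly older
        ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1   # strictly newer
    done
    return 1                                            # equal
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"        # matches the trace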
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:50.028 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.029 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.029 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.029 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.029 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.029 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.029 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.029 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.029 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.029 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:50.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:50.289 16:55:42 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:50.289 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:58.431 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:58.431 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:58.431 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:58.431 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:58.431 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:58.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:58.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:12:58.432 00:12:58.432 --- 10.0.0.2 ping statistics --- 00:12:58.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.432 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:58.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:58.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:12:58.432 00:12:58.432 --- 10.0.0.1 ping statistics --- 00:12:58.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.432 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1872329 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1872329 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1872329 ']' 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.432 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:58.432 [2024-11-20 16:55:49.732993] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:12:58.432 [2024-11-20 16:55:49.733066] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.432 [2024-11-20 16:55:49.833168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.432 [2024-11-20 16:55:49.886342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.432 [2024-11-20 16:55:49.886396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.432 [2024-11-20 16:55:49.886405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.432 [2024-11-20 16:55:49.886412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.432 [2024-11-20 16:55:49.886419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.432 [2024-11-20 16:55:49.888457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.432 [2024-11-20 16:55:49.888619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.432 [2024-11-20 16:55:49.888786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.432 [2024-11-20 16:55:49.888788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.432 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.432 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:58.432 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:58.432 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:58.432 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:58.432 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.432 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:58.692 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:58.692 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:58.692 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:58.692 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:58.692 "nvmf_tgt_1" 00:12:58.692 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:58.952 "nvmf_tgt_2" 00:12:58.952 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
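For readers following the multitarget exercise above: once the target app is up, the whole test reduces to three RPCs plus a jq length check. A minimal standalone sketch, assuming SPDK's generic scripts/rpc.py is on PATH and the target listens on the default /var/tmp/spdk.sock (the trace itself goes through a thin wrapper, multitarget_rpc.py, with the same verbs):

#!/usr/bin/env bash
# Sketch of the multitarget create/list/delete flow exercised in the trace.
set -euo pipefail
rpc=rpc.py   # SPDK ships this as scripts/rpc.py

# Exactly one default target exists after startup.
[ "$(${rpc} nvmf_get_targets | jq length)" -eq 1 ]

# Create two extra targets; -s 32 mirrors the flag used in the trace above.
${rpc} nvmf_create_target -n nvmf_tgt_1 -s 32
${rpc} nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$(${rpc} nvmf_get_targets | jq length)" -eq 3 ]

# Tear them down again, leaving only the default target behind.
${rpc} nvmf_delete_target -n nvmf_tgt_1
${rpc} nvmf_delete_target -n nvmf_tgt_2
[ "$(${rpc} nvmf_get_targets | jq length)" -eq 1 ]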
00:12:58.952 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:58.952 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:58.952 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:59.213 true 00:12:59.213 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:59.213 true 00:12:59.213 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:59.213 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:59.474 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:59.474 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:59.474 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:59.474 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:59.474 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:59.474 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:59.474 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:59.474 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:59.474 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:59.474 rmmod nvme_tcp 00:12:59.474 rmmod nvme_fabrics 00:12:59.474 rmmod nvme_keyring 00:12:59.474 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:59.474 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:59.475 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:59.475 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1872329 ']' 00:12:59.475 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1872329 00:12:59.475 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1872329 ']' 00:12:59.475 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1872329 00:12:59.475 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:59.475 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.475 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1872329 00:12:59.475 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.475 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.475 16:55:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1872329' 00:12:59.475 killing process with pid 1872329 00:12:59.475 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1872329 00:12:59.475 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1872329 00:12:59.736 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:59.736 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:59.736 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:59.736 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:59.736 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:59.736 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:59.736 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:59.736 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:59.736 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:59.736 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.736 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.736 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.648 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:01.648 00:13:01.648 real 0m11.830s 00:13:01.648 user 0m10.357s 00:13:01.648 sys 0m6.089s 00:13:01.648 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.648 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:01.648 ************************************ 00:13:01.648 END TEST nvmf_multitarget 00:13:01.648 ************************************ 00:13:01.908 16:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:01.908 16:55:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:01.908 16:55:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.908 16:55:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:01.908 ************************************ 00:13:01.908 START TEST nvmf_rpc 00:13:01.908 ************************************ 00:13:01.908 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:01.908 * Looking for test storage... 
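The teardown just logged (iptr, remove_spdk_ns) is the mirror image of the nvmf_tcp_init block earlier in the run. A condensed sketch of the whole namespace/iptables lifecycle, with interface and namespace names taken from the trace (run as root); the SPDK_NVMF comment tag is what makes the blind iptables-save | grep -v | iptables-restore cleanup safe, since it only strips rules the test itself inserted:

#!/usr/bin/env bash
set -euo pipefail
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0          # moved into the namespace, gets the target IP
INI_IF=cvl_0_1          # stays in the root namespace, gets the initiator IP

# --- setup ---
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port and tag the rule so cleanup can find it later.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: autotest rule'

# --- teardown: drop every tagged rule, then the namespace ---
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete "$NS"   # a physical NIC falls back to the root namespace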
00:13:01.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.908 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:01.908 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:01.908 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:01.908 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:02.168 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:02.168 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:02.168 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:02.168 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:02.168 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:02.168 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:02.168 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:02.168 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:02.168 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:02.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.168 --rc genhtml_branch_coverage=1 00:13:02.168 --rc genhtml_function_coverage=1 00:13:02.168 --rc genhtml_legend=1 00:13:02.168 --rc geninfo_all_blocks=1 00:13:02.168 --rc geninfo_unexecuted_blocks=1 00:13:02.168 00:13:02.168 ' 00:13:02.168 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:02.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.168 --rc genhtml_branch_coverage=1 00:13:02.168 --rc genhtml_function_coverage=1 00:13:02.168 --rc genhtml_legend=1 00:13:02.168 --rc geninfo_all_blocks=1 00:13:02.168 --rc geninfo_unexecuted_blocks=1 00:13:02.168 00:13:02.168 ' 00:13:02.168 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:02.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.168 --rc genhtml_branch_coverage=1 00:13:02.168 --rc genhtml_function_coverage=1 00:13:02.168 --rc genhtml_legend=1 00:13:02.168 --rc geninfo_all_blocks=1 00:13:02.168 --rc geninfo_unexecuted_blocks=1 00:13:02.168 00:13:02.168 ' 00:13:02.168 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:02.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.168 --rc genhtml_branch_coverage=1 00:13:02.168 --rc genhtml_function_coverage=1 00:13:02.168 --rc genhtml_legend=1 00:13:02.168 --rc geninfo_all_blocks=1 00:13:02.168 --rc geninfo_unexecuted_blocks=1 00:13:02.168 00:13:02.168 ' 00:13:02.168 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.168 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:02.168 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:13:02.168 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:02.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:02.169 16:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:02.169 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:10.309 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:10.309 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:10.309 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:10.309 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:10.309 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:10.310 16:56:01 
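The "Found net devices under 0000:4b:00.x" lines come from a plain sysfs walk rather than any SPDK tooling: each candidate PCI function is checked for a net/ directory, whose entries are the kernel netdev names. A standalone sketch of the same discovery, hard-coding the e810 vendor/device pair (0x8086/0x159b) matched above:

#!/usr/bin/env bash
set -u
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")   # e.g. 0x8086
    device=$(cat "$pci/device")   # e.g. 0x159b
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
    # Every netdev registered by this function appears as a directory here.
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue
        echo "Found net device under ${pci##*/}: ${net##*/}"
    done
done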
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:10.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:10.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:13:10.310 00:13:10.310 --- 10.0.0.2 ping statistics --- 00:13:10.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.310 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:10.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:10.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:13:10.310 00:13:10.310 --- 10.0.0.1 ping statistics --- 00:13:10.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.310 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1877009 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1877009 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1877009 ']' 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.310 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.310 [2024-11-20 16:56:01.730403] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:13:10.310 [2024-11-20 16:56:01.730474] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.310 [2024-11-20 16:56:01.831713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.310 [2024-11-20 16:56:01.884528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.310 [2024-11-20 16:56:01.884578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.310 [2024-11-20 16:56:01.884587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.310 [2024-11-20 16:56:01.884594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.310 [2024-11-20 16:56:01.884601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:10.310 [2024-11-20 16:56:01.886675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.310 [2024-11-20 16:56:01.886837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.310 [2024-11-20 16:56:01.886999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.310 [2024-11-20 16:56:01.887000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.571 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.571 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:10.571 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:10.571 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:10.571 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.571 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.571 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:10.571 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.571 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.571 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.571 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:10.571 "tick_rate": 2400000000, 00:13:10.571 "poll_groups": [ 00:13:10.571 { 00:13:10.571 "name": "nvmf_tgt_poll_group_000", 00:13:10.571 "admin_qpairs": 0, 00:13:10.571 "io_qpairs": 0, 00:13:10.571 "current_admin_qpairs": 0, 00:13:10.571 "current_io_qpairs": 0, 00:13:10.571 "pending_bdev_io": 0, 00:13:10.571 "completed_nvme_io": 0, 00:13:10.571 "transports": [] 00:13:10.571 }, 00:13:10.571 { 00:13:10.571 "name": "nvmf_tgt_poll_group_001", 00:13:10.571 "admin_qpairs": 0, 00:13:10.571 "io_qpairs": 0, 00:13:10.571 "current_admin_qpairs": 0, 00:13:10.571 "current_io_qpairs": 0, 00:13:10.571 "pending_bdev_io": 0, 00:13:10.571 "completed_nvme_io": 0, 00:13:10.571 "transports": [] 00:13:10.571 }, 00:13:10.571 { 00:13:10.571 "name": "nvmf_tgt_poll_group_002", 00:13:10.571 "admin_qpairs": 0, 00:13:10.571 "io_qpairs": 0, 00:13:10.571 
"current_admin_qpairs": 0, 00:13:10.571 "current_io_qpairs": 0, 00:13:10.571 "pending_bdev_io": 0, 00:13:10.571 "completed_nvme_io": 0, 00:13:10.571 "transports": [] 00:13:10.571 }, 00:13:10.571 { 00:13:10.571 "name": "nvmf_tgt_poll_group_003", 00:13:10.571 "admin_qpairs": 0, 00:13:10.571 "io_qpairs": 0, 00:13:10.571 "current_admin_qpairs": 0, 00:13:10.571 "current_io_qpairs": 0, 00:13:10.571 "pending_bdev_io": 0, 00:13:10.571 "completed_nvme_io": 0, 00:13:10.571 "transports": [] 00:13:10.571 } 00:13:10.571 ] 00:13:10.571 }' 00:13:10.571 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:10.571 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:10.572 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:10.572 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:10.572 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:10.572 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:10.572 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:10.572 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:10.572 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.572 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.572 [2024-11-20 16:56:02.708643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.572 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.572 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:10.572 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.572 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.572 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.572 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:10.572 "tick_rate": 2400000000, 00:13:10.572 "poll_groups": [ 00:13:10.572 { 00:13:10.572 "name": "nvmf_tgt_poll_group_000", 00:13:10.572 "admin_qpairs": 0, 00:13:10.572 "io_qpairs": 0, 00:13:10.572 "current_admin_qpairs": 0, 00:13:10.572 "current_io_qpairs": 0, 00:13:10.572 "pending_bdev_io": 0, 00:13:10.572 "completed_nvme_io": 0, 00:13:10.572 "transports": [ 00:13:10.572 { 00:13:10.572 "trtype": "TCP" 00:13:10.572 } 00:13:10.572 ] 00:13:10.572 }, 00:13:10.572 { 00:13:10.572 "name": "nvmf_tgt_poll_group_001", 00:13:10.572 "admin_qpairs": 0, 00:13:10.572 "io_qpairs": 0, 00:13:10.572 "current_admin_qpairs": 0, 00:13:10.572 "current_io_qpairs": 0, 00:13:10.572 "pending_bdev_io": 0, 00:13:10.572 "completed_nvme_io": 0, 00:13:10.572 "transports": [ 00:13:10.572 { 00:13:10.572 "trtype": "TCP" 00:13:10.572 } 00:13:10.572 ] 00:13:10.572 }, 00:13:10.572 { 00:13:10.572 "name": "nvmf_tgt_poll_group_002", 00:13:10.572 "admin_qpairs": 0, 00:13:10.572 "io_qpairs": 0, 00:13:10.572 "current_admin_qpairs": 0, 00:13:10.572 "current_io_qpairs": 0, 00:13:10.572 "pending_bdev_io": 0, 00:13:10.572 "completed_nvme_io": 0, 00:13:10.572 "transports": [ 00:13:10.572 { 00:13:10.572 "trtype": "TCP" 
00:13:10.572 } 00:13:10.572 ] 00:13:10.572 }, 00:13:10.572 { 00:13:10.572 "name": "nvmf_tgt_poll_group_003", 00:13:10.572 "admin_qpairs": 0, 00:13:10.572 "io_qpairs": 0, 00:13:10.572 "current_admin_qpairs": 0, 00:13:10.572 "current_io_qpairs": 0, 00:13:10.572 "pending_bdev_io": 0, 00:13:10.572 "completed_nvme_io": 0, 00:13:10.572 "transports": [ 00:13:10.572 { 00:13:10.572 "trtype": "TCP" 00:13:10.572 } 00:13:10.572 ] 00:13:10.572 } 00:13:10.572 ] 00:13:10.572 }' 00:13:10.572 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:10.572 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:10.833 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:10.833 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:10.833 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:10.833 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:10.833 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:10.833 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:10.833 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:10.833 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:10.833 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:10.833 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:10.833 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:10.833 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.834 Malloc1 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
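The jcount/jsum helpers seen above are one-liners over the nvmf_get_stats JSON: count how many values a jq filter yields, and sum a numeric field across all poll groups (one poll group per reactor core, so the trace expects 4, with 0 qpairs before any host connects). A minimal sketch, assuming rpc.py reaches the target on its default socket:

#!/usr/bin/env bash
set -euo pipefail
stats=$(rpc.py nvmf_get_stats)

# jcount equivalent: number of poll groups reported.
count=$(jq '.poll_groups[].name' <<<"$stats" | wc -l)

# jsum equivalent: total admin qpairs across every poll group.
total=$(jq '.poll_groups[].admin_qpairs' <<<"$stats" | awk '{s+=$1} END {print s}')

echo "poll groups: $count, admin qpairs: $total"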
common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.834 [2024-11-20 16:56:02.926538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:10.834 [2024-11-20 16:56:02.963613] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:10.834 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:10.834 could not add new controller: failed to write to nvme-fabrics device 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:10.834 16:56:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.834 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.834 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.834 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.741 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.741 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:12.741 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.741 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:12.741 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.651 [2024-11-20 16:56:06.749232] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:14.651 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:14.651 could not add new controller: failed to write to nvme-fabrics device 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.651 
16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.651 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:16.563 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:16.563 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:16.563 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:16.563 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:16.563 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:18.476 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.477 
16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.477 [2024-11-20 16:56:10.479092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.477 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.862 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:19.862 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:19.862 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.862 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:19.862 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:22.403 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.403 [2024-11-20 16:56:14.199421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.403 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:23.786 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:23.786 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:23.786 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:23.786 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:23.786 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:25.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.706 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.967 [2024-11-20 16:56:17.882842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.967 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.967 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:25.967 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.967 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.967 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.967 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:25.967 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.967 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.967 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.967 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:27.349 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:27.349 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:27.349 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:27.349 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:27.349 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:29.260 
16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:29.260 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:29.260 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:29.260 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:29.260 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:29.260 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:29.260 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:29.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.521 [2024-11-20 16:56:21.612353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.521 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:31.431 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:31.431 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:31.431 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.431 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:31.431 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:33.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.496 [2024-11-20 16:56:25.404726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.496 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:34.880 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:34.880 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:34.880 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:34.880 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:34.880 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:37.429 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:37.430 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:37.430 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:37.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:37.430 
16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.430 [2024-11-20 16:56:29.158424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.430 [2024-11-20 16:56:29.230597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 
16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.430 [2024-11-20 16:56:29.298773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:37.430 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.431 [2024-11-20 16:56:29.370993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.431 [2024-11-20 16:56:29.435193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:37.431 "tick_rate": 2400000000, 00:13:37.431 "poll_groups": [ 00:13:37.431 { 00:13:37.431 "name": "nvmf_tgt_poll_group_000", 00:13:37.431 "admin_qpairs": 0, 00:13:37.431 "io_qpairs": 224, 00:13:37.431 "current_admin_qpairs": 0, 00:13:37.431 "current_io_qpairs": 0, 00:13:37.431 "pending_bdev_io": 0, 00:13:37.431 "completed_nvme_io": 312, 00:13:37.431 "transports": [ 00:13:37.431 { 00:13:37.431 "trtype": "TCP" 00:13:37.431 } 00:13:37.431 ] 00:13:37.431 }, 00:13:37.431 { 00:13:37.431 "name": "nvmf_tgt_poll_group_001", 00:13:37.431 "admin_qpairs": 1, 00:13:37.431 "io_qpairs": 223, 00:13:37.431 "current_admin_qpairs": 0, 00:13:37.431 "current_io_qpairs": 0, 00:13:37.431 "pending_bdev_io": 0, 00:13:37.431 "completed_nvme_io": 435, 00:13:37.431 "transports": [ 00:13:37.431 { 00:13:37.431 "trtype": "TCP" 00:13:37.431 } 00:13:37.431 ] 00:13:37.431 }, 00:13:37.431 { 00:13:37.431 "name": "nvmf_tgt_poll_group_002", 00:13:37.431 "admin_qpairs": 6, 00:13:37.431 "io_qpairs": 218, 00:13:37.431 "current_admin_qpairs": 0, 00:13:37.431 "current_io_qpairs": 0, 00:13:37.431 "pending_bdev_io": 0, 00:13:37.431 "completed_nvme_io": 218, 00:13:37.431 "transports": [ 00:13:37.431 { 00:13:37.431 "trtype": "TCP" 00:13:37.431 } 00:13:37.431 ] 00:13:37.431 }, 00:13:37.431 { 00:13:37.431 "name": "nvmf_tgt_poll_group_003", 00:13:37.431 "admin_qpairs": 0, 00:13:37.431 "io_qpairs": 224, 00:13:37.431 "current_admin_qpairs": 0, 00:13:37.431 "current_io_qpairs": 0, 00:13:37.431 "pending_bdev_io": 0, 00:13:37.431 "completed_nvme_io": 274, 00:13:37.431 "transports": [ 00:13:37.431 { 00:13:37.431 "trtype": "TCP" 00:13:37.431 } 00:13:37.431 ] 00:13:37.431 } 00:13:37.431 ] 00:13:37.431 }' 00:13:37.431 16:56:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:37.431 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:37.692 rmmod nvme_tcp 00:13:37.692 rmmod nvme_fabrics 00:13:37.692 rmmod nvme_keyring 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1877009 ']' 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1877009 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1877009 ']' 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1877009 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1877009 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1877009' 00:13:37.692 killing process with pid 1877009 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1877009 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1877009 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:37.692 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:37.952 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:37.952 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:37.952 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.952 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:37.952 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.866 16:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:39.866 00:13:39.866 real 0m38.060s 00:13:39.866 user 1m53.888s 00:13:39.866 sys 0m7.906s 00:13:39.866 16:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.866 16:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.866 ************************************ 00:13:39.866 END TEST nvmf_rpc 00:13:39.866 ************************************ 00:13:39.866 16:56:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:39.866 16:56:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:39.866 16:56:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.866 16:56:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:39.866 ************************************ 00:13:39.866 START TEST nvmf_invalid 00:13:39.866 ************************************ 00:13:39.866 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:40.127 * Looking for test storage... 
00:13:40.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:40.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.127 --rc genhtml_branch_coverage=1 00:13:40.127 --rc genhtml_function_coverage=1 00:13:40.127 --rc genhtml_legend=1 00:13:40.127 --rc geninfo_all_blocks=1 00:13:40.127 --rc geninfo_unexecuted_blocks=1 00:13:40.127 00:13:40.127 ' 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:40.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.127 --rc genhtml_branch_coverage=1 00:13:40.127 --rc genhtml_function_coverage=1 00:13:40.127 --rc genhtml_legend=1 00:13:40.127 --rc geninfo_all_blocks=1 00:13:40.127 --rc geninfo_unexecuted_blocks=1 00:13:40.127 00:13:40.127 ' 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:40.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.127 --rc genhtml_branch_coverage=1 00:13:40.127 --rc genhtml_function_coverage=1 00:13:40.127 --rc genhtml_legend=1 00:13:40.127 --rc geninfo_all_blocks=1 00:13:40.127 --rc geninfo_unexecuted_blocks=1 00:13:40.127 00:13:40.127 ' 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:40.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.127 --rc genhtml_branch_coverage=1 00:13:40.127 --rc genhtml_function_coverage=1 00:13:40.127 --rc genhtml_legend=1 00:13:40.127 --rc geninfo_all_blocks=1 00:13:40.127 --rc geninfo_unexecuted_blocks=1 00:13:40.127 00:13:40.127 ' 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:40.127 16:56:32 
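
The scripts/common.sh trace above (lt 1.15 2 entering cmp_versions) compares dotted versions by splitting both strings on '.', '-' and ':' and walking the fields numerically, left to right. Condensed to a sketch (the helper name and zero-defaulting of missing fields are mine; the real script also validates each field with decimal()):

    version_lt() {   # version_lt 1.15 2 -> true, since 1 < 2 in the first field
        local IFS=.-:
        local -a a=($1) b=($2)
        local v x y
        for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
            x=${a[v]:-0} y=${b[v]:-0}
            ((x > y)) && return 1
            ((x < y)) && return 0
        done
        return 1   # equal is not less-than
    }
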
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.127 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:40.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
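
Each re-source of paths/export.sh above prepends the same three toolchain directories again, which is why the exported PATH accumulates so many duplicate entries. Harmless in a throwaway CI shell, but a guard like the following hypothetical helper (not what the script does) would keep the variable idempotent:

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;            # already there, skip
            *) PATH=$1:$PATH ;;
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/go/1.21.1/bin
    export PATH
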
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:40.128 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
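
gather_supported_nvmf_pci_devs, entered at the end of the trace above, declares one array per NIC family and then fills them by PCI vendor:device ID (the e810+=, x722+= and mlx+= lines that follow). A rough standalone equivalent of that classification (the IDs are taken from the trace; parsing lspci -n is my own simplification of the script's pci_bus_cache lookup):

    declare -A nic_family=(
        [8086:1592]=e810 [8086:159b]=e810   # Intel E810
        [8086:37d2]=x722                    # Intel X722
        [15b3:1017]=mlx  [15b3:1019]=mlx    # Mellanox ConnectX (subset)
    )
    while read -r addr id; do
        fam=${nic_family[$id]:-}
        [[ -z $fam ]] && continue
        for netdir in /sys/bus/pci/devices/0000:"$addr"/net/*; do
            [[ -e $netdir ]] && echo "$fam: ${netdir##*/} at 0000:$addr"
        done
    done < <(lspci -n | awk '{print $1, $3}')
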
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:48.272 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:48.273 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:48.273 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:48.273 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:48.273 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:48.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:48.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms
00:13:48.273
00:13:48.273 --- 10.0.0.2 ping statistics ---
00:13:48.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:48.273 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:48.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:48.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms
00:13:48.273
00:13:48.273 --- 10.0.0.1 ping statistics ---
00:13:48.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:48.273 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1886713
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1886713
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1886713 ']'
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:48.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:48.273 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:48.273 [2024-11-20 16:56:39.776050] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization...
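
The nvmf_tcp_init sequence traced above is the whole physical-NIC test topology: the target-side port is moved into its own network namespace so initiator and target traffic really crosses the link between the two E810 ports. Condensed to just the commands that appear in the trace:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # target port lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the host
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # host to namespace, 0.640 ms above
    ip netns exec "$NS" ping -c 1 10.0.0.1   # namespace to host, 0.276 ms above

nvmf_tgt is then launched under ip netns exec (visible in the waitforlisten trace above), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD.
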
00:13:48.274 [2024-11-20 16:56:39.776123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.274 [2024-11-20 16:56:39.851873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:48.274 [2024-11-20 16:56:39.899734] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.274 [2024-11-20 16:56:39.899784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.274 [2024-11-20 16:56:39.899791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.274 [2024-11-20 16:56:39.899797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.274 [2024-11-20 16:56:39.899802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.274 [2024-11-20 16:56:39.901544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.274 [2024-11-20 16:56:39.901708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.274 [2024-11-20 16:56:39.901874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.274 [2024-11-20 16:56:39.901875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.274 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.274 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:48.274 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:48.274 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:48.274 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:48.274 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.274 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:48.274 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17669 00:13:48.274 [2024-11-20 16:56:40.229214] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:48.274 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:48.274 { 00:13:48.274 "nqn": "nqn.2016-06.io.spdk:cnode17669", 00:13:48.274 "tgt_name": "foobar", 00:13:48.274 "method": "nvmf_create_subsystem", 00:13:48.274 "req_id": 1 00:13:48.274 } 00:13:48.274 Got JSON-RPC error response 00:13:48.274 response: 00:13:48.274 { 00:13:48.274 "code": -32603, 00:13:48.274 "message": "Unable to find target foobar" 00:13:48.274 }' 00:13:48.274 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:48.274 { 00:13:48.274 "nqn": "nqn.2016-06.io.spdk:cnode17669", 00:13:48.274 "tgt_name": "foobar", 00:13:48.274 "method": "nvmf_create_subsystem", 00:13:48.274 "req_id": 1 00:13:48.274 } 00:13:48.274 Got JSON-RPC error response 00:13:48.274 
response: 00:13:48.274 { 00:13:48.274 "code": -32603, 00:13:48.274 "message": "Unable to find target foobar" 00:13:48.274 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:48.274 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:48.274 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14391 00:13:48.274 [2024-11-20 16:56:40.434000] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14391: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:48.534 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:48.534 { 00:13:48.534 "nqn": "nqn.2016-06.io.spdk:cnode14391", 00:13:48.534 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:48.534 "method": "nvmf_create_subsystem", 00:13:48.534 "req_id": 1 00:13:48.534 } 00:13:48.534 Got JSON-RPC error response 00:13:48.534 response: 00:13:48.534 { 00:13:48.534 "code": -32602, 00:13:48.534 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:48.534 }' 00:13:48.534 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:48.534 { 00:13:48.534 "nqn": "nqn.2016-06.io.spdk:cnode14391", 00:13:48.534 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:48.534 "method": "nvmf_create_subsystem", 00:13:48.534 "req_id": 1 00:13:48.534 } 00:13:48.535 Got JSON-RPC error response 00:13:48.535 response: 00:13:48.535 { 00:13:48.535 "code": -32602, 00:13:48.535 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:48.535 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10148 00:13:48.535 [2024-11-20 16:56:40.642732] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10148: invalid model number 'SPDK_Controller' 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:48.535 { 00:13:48.535 "nqn": "nqn.2016-06.io.spdk:cnode10148", 00:13:48.535 "model_number": "SPDK_Controller\u001f", 00:13:48.535 "method": "nvmf_create_subsystem", 00:13:48.535 "req_id": 1 00:13:48.535 } 00:13:48.535 Got JSON-RPC error response 00:13:48.535 response: 00:13:48.535 { 00:13:48.535 "code": -32602, 00:13:48.535 "message": "Invalid MN SPDK_Controller\u001f" 00:13:48.535 }' 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:48.535 { 00:13:48.535 "nqn": "nqn.2016-06.io.spdk:cnode10148", 00:13:48.535 "model_number": "SPDK_Controller\u001f", 00:13:48.535 "method": "nvmf_create_subsystem", 00:13:48.535 "req_id": 1 00:13:48.535 } 00:13:48.535 Got JSON-RPC error response 00:13:48.535 response: 00:13:48.535 { 00:13:48.535 "code": -32602, 00:13:48.535 "message": "Invalid MN SPDK_Controller\u001f" 00:13:48.535 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:48.535 16:56:40 
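
Both rejections above follow the same negative-test shape: drive rpc.py with deliberately bad input, capture the JSON-RPC error, then glob-match the message (the escaped *\I\n\v\a\l\i\d...* patterns in the trace). Stripped of the workspace paths, the pattern is roughly this (the error-capture redirection is simplified; invalid.sh's exact plumbing is not shown in the trace):

    rpc=./scripts/rpc.py
    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17669 2>&1) || true
    [[ $out == *"Unable to find target"* ]]      # code -32603 above

    out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14391 2>&1) || true
    [[ $out == *"Invalid SN"* ]]                 # code -32602 above

The \037 (unit-separator control character) appended to the serial and model numbers is what renders as \u001f in the JSON error bodies.
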
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.535 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.797 16:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
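
The character-by-character trace running through this block is gen_random_s building a 21-byte serial number: pick an entry from the chars table of code points 32..127, print it as hex, and append the echo -e expansion to the string. The loop reduces to the sketch below (RANDOM is pinned to 0 at target/invalid.sh@16 above, so the "random" string is reproducible across runs):

    gen_random_s() {
        local length=$1 ll string=
        local chars=({32..127})          # printable ASCII plus DEL (0x7f)
        for ((ll = 0; ll < length; ll++)); do
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }
    gen_random_s 21    # yields 'k$%&>oj/VP?j4DP>J8qk' (with an embedded 0x7f) here

Quoting matters less than it looks: characters like '$', '&' and '>' land in the string literally because string+=... is an assignment, not a command word.
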
00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x38' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:48.797 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:48.798 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:48.798 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:48.798 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:48.798 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ k == \- ]] 00:13:48.798 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'k$%&>oj/VP?j4DP>J8qk' 00:13:48.798 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'k$%&>oj/VP?j4DP>J8qk' nqn.2016-06.io.spdk:cnode12211 00:13:49.060 [2024-11-20 16:56:41.028229] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12211: invalid serial number 'k$%&>oj/VP?j4DP>J8qk' 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:49.060 { 00:13:49.060 "nqn": "nqn.2016-06.io.spdk:cnode12211", 00:13:49.060 "serial_number": "k$%&>oj/VP?j4DP>J8\u007fqk", 00:13:49.060 "method": "nvmf_create_subsystem", 00:13:49.060 "req_id": 1 00:13:49.060 } 00:13:49.060 Got JSON-RPC error response 00:13:49.060 response: 00:13:49.060 { 00:13:49.060 "code": -32602, 00:13:49.060 "message": "Invalid SN k$%&>oj/VP?j4DP>J8\u007fqk" 00:13:49.060 }' 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:49.060 { 00:13:49.060 "nqn": "nqn.2016-06.io.spdk:cnode12211", 00:13:49.060 "serial_number": "k$%&>oj/VP?j4DP>J8\u007fqk", 00:13:49.060 "method": "nvmf_create_subsystem", 00:13:49.060 "req_id": 1 00:13:49.060 } 00:13:49.060 Got JSON-RPC error response 00:13:49.060 response: 00:13:49.060 { 00:13:49.060 "code": -32602, 00:13:49.060 "message": "Invalid SN k$%&>oj/VP?j4DP>J8\u007fqk" 00:13:49.060 } == 
*\I\n\v\a\l\i\d\ \S\N* ]] 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# echo -e '\x6c' 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:49.060 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24-25 -- # [per-character loop condensed: each iteration runs (( ll++ )), (( ll < length )), printf %x <code>, echo -e '\xNN', string+=<char>; characters appended across these iterations: o 0 F ( ) ~ L / < a D @ ] 1 8 S w ' ' > J w ] y g s T J b 0 % h m b] 00:13:49.323 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:49.323 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid --
target/invalid.sh@25 -- # echo -e '\x64' 00:13:49.323 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:49.323 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:49.323 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:49.323 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:49.323 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:49.323 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:49.323 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:49.323 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:49.323 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:49.323 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:49.323 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:49.323 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:49.323 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:49.323 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ( == \- ]] 00:13:49.323 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '(0L_lo0F()~L/Jw]ygsTJb0%hmbd$' 00:13:49.323 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '(0L_lo0F()~L/Jw]ygsTJb0%hmbd$' nqn.2016-06.io.spdk:cnode7546 00:13:49.585 [2024-11-20 16:56:41.574198] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7546: invalid model number '(0L_lo0F()~L/Jw]ygsTJb0%hmbd$' 00:13:49.585 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:49.585 { 00:13:49.585 "nqn": "nqn.2016-06.io.spdk:cnode7546", 00:13:49.585 "model_number": "(0L_lo0F()~L/Jw]ygsTJb0%hmbd$\u007f", 00:13:49.585 "method": "nvmf_create_subsystem", 00:13:49.585 "req_id": 1 00:13:49.585 } 00:13:49.585 Got JSON-RPC error response 00:13:49.585 response: 00:13:49.585 { 00:13:49.585 "code": -32602, 00:13:49.585 "message": "Invalid MN (0L_lo0F()~L/Jw]ygsTJb0%hmbd$\u007f" 00:13:49.585 }' 00:13:49.585 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:49.585 { 00:13:49.585 "nqn": "nqn.2016-06.io.spdk:cnode7546", 00:13:49.585 "model_number": "(0L_lo0F()~L/Jw]ygsTJb0%hmbd$\u007f", 00:13:49.585 "method": "nvmf_create_subsystem", 00:13:49.585 "req_id": 1 00:13:49.585 } 00:13:49.585 Got JSON-RPC error response 00:13:49.585 response: 00:13:49.585 { 00:13:49.585 "code": -32602, 00:13:49.585 "message": "Invalid MN (0L_lo0F()~L/Jw]ygsTJb0%hmbd$\u007f" 00:13:49.585 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:49.585 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:49.846 [2024-11-20 16:56:41.775050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.846 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # 
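The trace above shows target/invalid.sh assembling a random model number one character at a time and then verifying that nvmf_create_subsystem rejects it with "Invalid MN". A minimal sketch of that per-character pattern, assuming plain bash (this paraphrases the traced loop; it is not copied from invalid.sh):

  string=''
  for code in 40 48 76 95; do      # example codes for '(' '0' 'L' '_'
    hex=$(printf '%x' "$code")     # decimal code -> hex digits, e.g. 40 -> 28
    printf -v char "\x$hex"        # expand the \xNN escape; printf -v also keeps a bare space
    string+=$char
  done
  echo "$string"                   # -> (0L_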
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:49.846 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:49.846 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:49.846 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:49.846 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:49.846 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:50.108 [2024-11-20 16:56:42.164188] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:50.108 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:50.108 { 00:13:50.108 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:50.108 "listen_address": { 00:13:50.108 "trtype": "tcp", 00:13:50.108 "traddr": "", 00:13:50.108 "trsvcid": "4421" 00:13:50.108 }, 00:13:50.108 "method": "nvmf_subsystem_remove_listener", 00:13:50.108 "req_id": 1 00:13:50.108 } 00:13:50.108 Got JSON-RPC error response 00:13:50.108 response: 00:13:50.108 { 00:13:50.108 "code": -32602, 00:13:50.108 "message": "Invalid parameters" 00:13:50.108 }' 00:13:50.108 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:50.108 { 00:13:50.108 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:50.108 "listen_address": { 00:13:50.108 "trtype": "tcp", 00:13:50.108 "traddr": "", 00:13:50.108 "trsvcid": "4421" 00:13:50.108 }, 00:13:50.108 "method": "nvmf_subsystem_remove_listener", 00:13:50.108 "req_id": 1 00:13:50.108 } 00:13:50.108 Got JSON-RPC error response 00:13:50.108 response: 00:13:50.108 { 00:13:50.108 "code": -32602, 00:13:50.108 "message": "Invalid parameters" 00:13:50.108 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:50.108 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11186 -i 0 00:13:50.369 [2024-11-20 16:56:42.348708] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11186: invalid cntlid range [0-65519] 00:13:50.369 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:50.369 { 00:13:50.369 "nqn": "nqn.2016-06.io.spdk:cnode11186", 00:13:50.369 "min_cntlid": 0, 00:13:50.369 "method": "nvmf_create_subsystem", 00:13:50.369 "req_id": 1 00:13:50.369 } 00:13:50.369 Got JSON-RPC error response 00:13:50.369 response: 00:13:50.369 { 00:13:50.369 "code": -32602, 00:13:50.369 "message": "Invalid cntlid range [0-65519]" 00:13:50.369 }' 00:13:50.369 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:50.369 { 00:13:50.369 "nqn": "nqn.2016-06.io.spdk:cnode11186", 00:13:50.369 "min_cntlid": 0, 00:13:50.369 "method": "nvmf_create_subsystem", 00:13:50.369 "req_id": 1 00:13:50.369 } 00:13:50.369 Got JSON-RPC error response 00:13:50.369 response: 00:13:50.369 { 00:13:50.369 "code": -32602, 00:13:50.369 "message": "Invalid cntlid range [0-65519]" 00:13:50.369 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:50.369 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # 
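The -i 0 call above and the four calls that follow probe both ends of the controller-ID window; the error strings in this log imply SPDK accepts only 1 <= min_cntlid <= max_cntlid <= 65519. Condensed for reference, with the full rpc.py path shortened to $rpc (a summary of calls visible in this log, not trace output):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11186 -i 0        # min below 1     -> Invalid cntlid range [0-65519]
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14786 -i 65520    # min above 65519 -> Invalid cntlid range [65520-65519]
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30168 -I 0        # max below 1     -> Invalid cntlid range [1-0]
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28293 -I 65520    # max above 65519 -> Invalid cntlid range [1-65520]
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23466 -i 6 -I 5   # min > max       -> Invalid cntlid range [6-5]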
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14786 -i 65520 00:13:50.369 [2024-11-20 16:56:42.537300] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14786: invalid cntlid range [65520-65519] 00:13:50.631 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:50.631 { 00:13:50.631 "nqn": "nqn.2016-06.io.spdk:cnode14786", 00:13:50.631 "min_cntlid": 65520, 00:13:50.631 "method": "nvmf_create_subsystem", 00:13:50.631 "req_id": 1 00:13:50.631 } 00:13:50.631 Got JSON-RPC error response 00:13:50.631 response: 00:13:50.631 { 00:13:50.631 "code": -32602, 00:13:50.631 "message": "Invalid cntlid range [65520-65519]" 00:13:50.631 }' 00:13:50.631 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:50.631 { 00:13:50.631 "nqn": "nqn.2016-06.io.spdk:cnode14786", 00:13:50.631 "min_cntlid": 65520, 00:13:50.631 "method": "nvmf_create_subsystem", 00:13:50.631 "req_id": 1 00:13:50.631 } 00:13:50.631 Got JSON-RPC error response 00:13:50.631 response: 00:13:50.631 { 00:13:50.631 "code": -32602, 00:13:50.631 "message": "Invalid cntlid range [65520-65519]" 00:13:50.631 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:50.631 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30168 -I 0 00:13:50.631 [2024-11-20 16:56:42.721897] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30168: invalid cntlid range [1-0] 00:13:50.631 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:50.631 { 00:13:50.631 "nqn": "nqn.2016-06.io.spdk:cnode30168", 00:13:50.631 "max_cntlid": 0, 00:13:50.631 "method": "nvmf_create_subsystem", 00:13:50.631 "req_id": 1 00:13:50.631 } 00:13:50.631 Got JSON-RPC error response 00:13:50.631 response: 00:13:50.631 { 00:13:50.631 "code": -32602, 00:13:50.631 "message": "Invalid cntlid range [1-0]" 00:13:50.631 }' 00:13:50.631 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:50.631 { 00:13:50.631 "nqn": "nqn.2016-06.io.spdk:cnode30168", 00:13:50.631 "max_cntlid": 0, 00:13:50.631 "method": "nvmf_create_subsystem", 00:13:50.631 "req_id": 1 00:13:50.631 } 00:13:50.631 Got JSON-RPC error response 00:13:50.631 response: 00:13:50.631 { 00:13:50.631 "code": -32602, 00:13:50.631 "message": "Invalid cntlid range [1-0]" 00:13:50.631 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:50.631 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28293 -I 65520 00:13:50.892 [2024-11-20 16:56:42.902430] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28293: invalid cntlid range [1-65520] 00:13:50.892 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:50.892 { 00:13:50.892 "nqn": "nqn.2016-06.io.spdk:cnode28293", 00:13:50.892 "max_cntlid": 65520, 00:13:50.892 "method": "nvmf_create_subsystem", 00:13:50.892 "req_id": 1 00:13:50.892 } 00:13:50.892 Got JSON-RPC error response 00:13:50.892 response: 00:13:50.892 { 00:13:50.892 "code": -32602, 00:13:50.892 "message": "Invalid cntlid range [1-65520]" 00:13:50.892 }' 00:13:50.892 16:56:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:50.892 { 00:13:50.892 "nqn": "nqn.2016-06.io.spdk:cnode28293", 00:13:50.892 "max_cntlid": 65520, 00:13:50.892 "method": "nvmf_create_subsystem", 00:13:50.892 "req_id": 1 00:13:50.892 } 00:13:50.892 Got JSON-RPC error response 00:13:50.892 response: 00:13:50.892 { 00:13:50.892 "code": -32602, 00:13:50.892 "message": "Invalid cntlid range [1-65520]" 00:13:50.892 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:50.892 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23466 -i 6 -I 5 00:13:51.153 [2024-11-20 16:56:43.087028] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23466: invalid cntlid range [6-5] 00:13:51.153 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:51.153 { 00:13:51.153 "nqn": "nqn.2016-06.io.spdk:cnode23466", 00:13:51.153 "min_cntlid": 6, 00:13:51.153 "max_cntlid": 5, 00:13:51.153 "method": "nvmf_create_subsystem", 00:13:51.153 "req_id": 1 00:13:51.153 } 00:13:51.153 Got JSON-RPC error response 00:13:51.153 response: 00:13:51.153 { 00:13:51.153 "code": -32602, 00:13:51.153 "message": "Invalid cntlid range [6-5]" 00:13:51.153 }' 00:13:51.153 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:51.153 { 00:13:51.153 "nqn": "nqn.2016-06.io.spdk:cnode23466", 00:13:51.153 "min_cntlid": 6, 00:13:51.153 "max_cntlid": 5, 00:13:51.153 "method": "nvmf_create_subsystem", 00:13:51.153 "req_id": 1 00:13:51.153 } 00:13:51.153 Got JSON-RPC error response 00:13:51.153 response: 00:13:51.153 { 00:13:51.153 "code": -32602, 00:13:51.153 "message": "Invalid cntlid range [6-5]" 00:13:51.153 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:51.153 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:51.153 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:51.153 { 00:13:51.153 "name": "foobar", 00:13:51.153 "method": "nvmf_delete_target", 00:13:51.153 "req_id": 1 00:13:51.153 } 00:13:51.153 Got JSON-RPC error response 00:13:51.153 response: 00:13:51.153 { 00:13:51.153 "code": -32602, 00:13:51.153 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:51.153 }' 00:13:51.153 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:51.154 { 00:13:51.154 "name": "foobar", 00:13:51.154 "method": "nvmf_delete_target", 00:13:51.154 "req_id": 1 00:13:51.154 } 00:13:51.154 Got JSON-RPC error response 00:13:51.154 response: 00:13:51.154 { 00:13:51.154 "code": -32602, 00:13:51.154 "message": "The specified target doesn't exist, cannot delete it." 
00:13:51.154 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:51.154 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:51.154 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:51.154 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:51.154 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:51.154 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:51.154 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:51.154 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:51.154 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:51.154 rmmod nvme_tcp 00:13:51.154 rmmod nvme_fabrics 00:13:51.154 rmmod nvme_keyring 00:13:51.154 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:51.154 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:51.154 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:51.154 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1886713 ']' 00:13:51.154 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1886713 00:13:51.154 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1886713 ']' 00:13:51.154 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1886713 00:13:51.154 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:51.154 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.154 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1886713 00:13:51.415 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.415 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.415 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1886713' 00:13:51.415 killing process with pid 1886713 00:13:51.415 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1886713 00:13:51.415 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1886713 00:13:51.415 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:51.415 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:51.415 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:51.415 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:51.415 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:51.415 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:51.415 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore 00:13:51.415 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:51.415 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:51.415 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.415 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:51.415 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.961 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:53.961 00:13:53.961 real 0m13.536s 00:13:53.961 user 0m18.829s 00:13:53.961 sys 0m6.618s 00:13:53.961 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:53.962 ************************************ 00:13:53.962 END TEST nvmf_invalid 00:13:53.962 ************************************ 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:53.962 ************************************ 00:13:53.962 START TEST nvmf_connect_stress 00:13:53.962 ************************************ 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:53.962 * Looking for test storage... 
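run_test wraps each suite with the START TEST / END TEST banners and the real/user/sys timing seen above. A rough reconstruction of that pattern, inferred only from the banners in this log (the actual helper lives in autotest_common.sh and does more bookkeeping):

  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                      # e.g. connect_stress.sh --transport=tcp
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }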
00:13:53.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:53.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.962 --rc genhtml_branch_coverage=1 00:13:53.962 --rc genhtml_function_coverage=1 00:13:53.962 --rc genhtml_legend=1 00:13:53.962 --rc geninfo_all_blocks=1 00:13:53.962 --rc geninfo_unexecuted_blocks=1 00:13:53.962 00:13:53.962 ' 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:53.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.962 --rc genhtml_branch_coverage=1 00:13:53.962 --rc genhtml_function_coverage=1 00:13:53.962 --rc genhtml_legend=1 00:13:53.962 --rc geninfo_all_blocks=1 00:13:53.962 --rc geninfo_unexecuted_blocks=1 00:13:53.962 00:13:53.962 ' 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:53.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.962 --rc genhtml_branch_coverage=1 00:13:53.962 --rc genhtml_function_coverage=1 00:13:53.962 --rc genhtml_legend=1 00:13:53.962 --rc geninfo_all_blocks=1 00:13:53.962 --rc geninfo_unexecuted_blocks=1 00:13:53.962 00:13:53.962 ' 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:53.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.962 --rc genhtml_branch_coverage=1 00:13:53.962 --rc genhtml_function_coverage=1 00:13:53.962 --rc genhtml_legend=1 00:13:53.962 --rc geninfo_all_blocks=1 00:13:53.962 --rc geninfo_unexecuted_blocks=1 00:13:53.962 00:13:53.962 ' 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
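The export.sh step below prepends the golangci, protoc, and Go tool directories to PATH on every source, which is why the same prefixes repeat many times in the values that follow. A small idempotent guard of the kind that would avoid the duplication (a sketch only; prepend_path is an invented name, not an SPDK helper):

  prepend_path() {
    case ":$PATH:" in
      *":$1:"*) ;;                 # already present: leave PATH alone
      *) PATH="$1:$PATH" ;;
    esac
  }
  prepend_path /opt/golangci/1.54.2/bin
  prepend_path /opt/protoc/21.7/bin
  prepend_path /opt/go/1.21.1/bin
  export PATH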
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:53.962 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:53.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:53.963 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:02.105 16:56:53 
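The "[: : integer expression expected" complaint above is nvmf/common.sh line 33 running a numeric test against an empty string; the test simply fails and the branch is skipped, but the message is noise on every run. A defensive form that keeps the check quiet (a sketch; SPDK_TEST_X is a stand-in variable name, not taken from this log):

  flag=${SPDK_TEST_X:-0}           # default unset/empty to 0 before the numeric comparison
  if [ "$flag" -eq 1 ]; then
    echo "feature enabled"
  fi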
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:02.105 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.105 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:02.105 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:02.106 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:02.106 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:02.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:14:02.106 00:14:02.106 --- 10.0.0.2 ping statistics --- 00:14:02.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.106 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:02.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:14:02.106 00:14:02.106 --- 10.0.0.1 ping statistics --- 00:14:02.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.106 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1891738 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1891738 00:14:02.106 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:02.107 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1891738 ']' 00:14:02.107 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.107 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.107 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:02.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.107 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.107 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.107 [2024-11-20 16:56:53.475646] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:14:02.107 [2024-11-20 16:56:53.475713] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.107 [2024-11-20 16:56:53.577741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:02.107 [2024-11-20 16:56:53.628845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.107 [2024-11-20 16:56:53.628891] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.107 [2024-11-20 16:56:53.628901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.107 [2024-11-20 16:56:53.628909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.107 [2024-11-20 16:56:53.628918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.107 [2024-11-20 16:56:53.631051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.107 [2024-11-20 16:56:53.631201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.107 [2024-11-20 16:56:53.631251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.369 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.369 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:02.369 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:02.369 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:02.369 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.369 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.369 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:02.369 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.369 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.369 [2024-11-20 16:56:54.350395] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.369 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.369 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:02.369 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:02.369 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.369 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.369 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.369 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.369 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.370 [2024-11-20 16:56:54.376106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.370 NULL1 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1892068 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.370 16:56:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.370 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.942 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.942 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:02.942 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.942 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.942 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.203 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.203 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:03.203 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.203 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.203 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.464 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.464 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:03.464 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.464 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.464 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.725 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.725 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:03.725 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.725 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.725 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.986 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.986 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:03.987 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.987 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.987 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.555 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.555 16:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:04.555 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.555 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.555 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.814 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.814 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:04.814 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.814 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.814 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.073 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.073 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:05.074 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.074 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.074 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.333 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.333 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:05.333 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.333 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.333 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.592 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.592 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:05.592 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.592 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.592 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.161 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.161 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:06.161 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.161 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.161 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.421 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.421 16:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:06.421 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.421 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.421 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.680 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.680 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:06.680 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.680 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.680 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.940 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.940 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:06.940 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.940 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.940 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.213 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.213 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:07.213 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.213 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.213 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.787 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.787 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:07.787 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.787 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.787 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.046 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.046 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:08.046 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.046 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.046 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.306 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.306 16:57:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:08.306 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.306 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.306 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.567 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.567 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:08.567 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.567 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.567 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.827 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.827 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:08.827 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.827 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.827 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.397 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.397 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:09.397 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.397 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.398 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.657 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.657 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:09.657 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.657 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.657 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.917 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.917 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:09.917 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.917 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.917 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.178 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.178 16:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:10.178 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.178 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.178 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.747 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.747 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:10.747 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.748 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.748 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.007 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.007 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:11.007 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.007 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.007 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.266 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.266 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:11.266 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.266 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.266 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.524 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.525 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:11.525 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.525 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.525 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.785 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.785 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:11.785 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.785 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.785 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.355 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.355 16:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:12.355 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.355 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.355 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.616 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.616 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:12.616 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.616 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.616 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.616 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1892068 00:14:12.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1892068) - No such process 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1892068 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:12.876 rmmod nvme_tcp 00:14:12.876 rmmod nvme_fabrics 00:14:12.876 rmmod nvme_keyring 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1891738 ']' 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1891738 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1891738 ']' 00:14:12.876 16:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1891738 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.876 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1891738 00:14:13.136 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:13.136 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:13.136 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1891738' 00:14:13.136 killing process with pid 1891738 00:14:13.136 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1891738 00:14:13.136 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1891738 00:14:13.136 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:13.136 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:13.136 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:13.136 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:13.136 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:13.136 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:13.136 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:13.136 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:13.136 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:13.136 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.136 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:13.136 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:15.683 00:14:15.683 real 0m21.601s 00:14:15.683 user 0m43.072s 00:14:15.683 sys 0m9.489s 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.683 ************************************ 00:14:15.683 END TEST nvmf_connect_stress 00:14:15.683 ************************************ 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:15.683 
16:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:15.683 ************************************ 00:14:15.683 START TEST nvmf_fused_ordering 00:14:15.683 ************************************ 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:15.683 * Looking for test storage... 00:14:15.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:15.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.683 --rc genhtml_branch_coverage=1 00:14:15.683 --rc genhtml_function_coverage=1 00:14:15.683 --rc genhtml_legend=1 00:14:15.683 --rc geninfo_all_blocks=1 00:14:15.683 --rc geninfo_unexecuted_blocks=1 00:14:15.683 00:14:15.683 ' 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:15.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.683 --rc genhtml_branch_coverage=1 00:14:15.683 --rc genhtml_function_coverage=1 00:14:15.683 --rc genhtml_legend=1 00:14:15.683 --rc geninfo_all_blocks=1 00:14:15.683 --rc geninfo_unexecuted_blocks=1 00:14:15.683 00:14:15.683 ' 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:15.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.683 --rc genhtml_branch_coverage=1 00:14:15.683 --rc genhtml_function_coverage=1 00:14:15.683 --rc genhtml_legend=1 00:14:15.683 --rc geninfo_all_blocks=1 00:14:15.683 --rc geninfo_unexecuted_blocks=1 00:14:15.683 00:14:15.683 ' 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:15.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.683 --rc genhtml_branch_coverage=1 00:14:15.683 --rc genhtml_function_coverage=1 00:14:15.683 --rc genhtml_legend=1 00:14:15.683 --rc geninfo_all_blocks=1 00:14:15.683 --rc geninfo_unexecuted_blocks=1 00:14:15.683 00:14:15.683 ' 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.683 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:15.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:15.684 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:23.828 16:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:23.828 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:23.829 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:23.829 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:23.829 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:23.829 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
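The two device loops above resolve each E810 PCI function to its kernel interface by globbing sysfs, which is how 0000:4b:00.0 and 0000:4b:00.1 come back as cvl_0_0 and cvl_0_1. A minimal standalone sketch of that lookup, with the PCI address taken from this run and variable names mirroring nvmf/common.sh:

# map one PCI function to the net devices it exposes, as the trace above does
pci=0000:4b:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob the sysfs net/ dir, e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"

The [[ ice == unknown ]] / [[ ice == unbound ]] checks in the trace are the script confirming each port is still bound to a kernel network driver (ice) before relying on this glob; a port bound to a userspace driver exposes no net/ directory to find.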
-- # net_devs+=("${pci_net_devs[@]}") 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:23.829 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:23.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:14:23.829 00:14:23.829 --- 10.0.0.2 ping statistics --- 00:14:23.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.829 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:23.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:23.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:14:23.829 00:14:23.829 --- 10.0.0.1 ping statistics --- 00:14:23.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.829 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1898827 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1898827 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1898827 ']' 00:14:23.829 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.830 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.830 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
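Condensed, the nvmf_tcp_init sequence above builds a two-port, two-namespace topology: the first E810 port (cvl_0_0) moves into a private namespace as the target side, its peer (cvl_0_1) stays in the root namespace as the initiator, port 4420 is opened in the firewall, and both directions are ping-verified. The same steps outside the harness, as a sketch using the addresses and names from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:...'    # tagged so teardown can strip it with grep -v SPDK_NVMF
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

The namespace split matters on a phy run: without it the kernel would short-circuit traffic between two local interfaces, and the NVMe/TCP stream would never touch the wire.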
/var/tmp/spdk.sock...' 00:14:23.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.830 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.830 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.830 [2024-11-20 16:57:15.208187] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:14:23.830 [2024-11-20 16:57:15.208255] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.830 [2024-11-20 16:57:15.309404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.830 [2024-11-20 16:57:15.359991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.830 [2024-11-20 16:57:15.360041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.830 [2024-11-20 16:57:15.360050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.830 [2024-11-20 16:57:15.360057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.830 [2024-11-20 16:57:15.360064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.830 [2024-11-20 16:57:15.360831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:24.091 [2024-11-20 16:57:16.069716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:24.091 [2024-11-20 16:57:16.093974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:24.091 NULL1 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.091 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:24.091 [2024-11-20 16:57:16.162989] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
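Unwrapped from the rpc_cmd helper, the target configuration above is an ordinary scripts/rpc.py sequence against /var/tmp/spdk.sock; the values are the ones used in this run, and the relative rpc.py path is an assumption:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192             # flags exactly as the test passes them; -u 8192 = io-unit-size
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                                   # allow any host, serial number, max 10 namespaces
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512                     # 1000 MiB null bdev, 512 B blocks -> the "size: 1GB" namespace below
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary then attaches as an initiator with the connection string shown in the trace (trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1) and prints one fused_ordering(N) line per completed iteration.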
00:14:24.091 [2024-11-20 16:57:16.163034] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1899043 ] 00:14:24.663 Attached to nqn.2016-06.io.spdk:cnode1 00:14:24.663 Namespace ID: 1 size: 1GB 00:14:24.663 fused_ordering(0) 00:14:24.663 fused_ordering(1) 00:14:24.663 fused_ordering(2) 00:14:24.663 fused_ordering(3) 00:14:24.663 fused_ordering(4) 00:14:24.663 fused_ordering(5) 00:14:24.663 fused_ordering(6) 00:14:24.663 fused_ordering(7) 00:14:24.663 fused_ordering(8) 00:14:24.663 fused_ordering(9) 00:14:24.663 fused_ordering(10) 00:14:24.663 fused_ordering(11) 00:14:24.663 fused_ordering(12) 00:14:24.663 fused_ordering(13) 00:14:24.663 fused_ordering(14) 00:14:24.663 fused_ordering(15) 00:14:24.663 fused_ordering(16) 00:14:24.663 fused_ordering(17) 00:14:24.663 fused_ordering(18) 00:14:24.663 fused_ordering(19) 00:14:24.663 fused_ordering(20) 00:14:24.663 fused_ordering(21) 00:14:24.663 fused_ordering(22) 00:14:24.663 fused_ordering(23) 00:14:24.663 fused_ordering(24) 00:14:24.663 fused_ordering(25) 00:14:24.663 fused_ordering(26) 00:14:24.663 fused_ordering(27) 00:14:24.663 fused_ordering(28) 00:14:24.663 fused_ordering(29) 00:14:24.663 fused_ordering(30) 00:14:24.663 fused_ordering(31) 00:14:24.663 fused_ordering(32) 00:14:24.663 fused_ordering(33) 00:14:24.663 fused_ordering(34) 00:14:24.663 fused_ordering(35) 00:14:24.663 fused_ordering(36) 00:14:24.663 fused_ordering(37) 00:14:24.663 fused_ordering(38) 00:14:24.663 fused_ordering(39) 00:14:24.664 fused_ordering(40) 00:14:24.664 fused_ordering(41) 00:14:24.664 fused_ordering(42) 00:14:24.664 fused_ordering(43) 00:14:24.664 fused_ordering(44) 00:14:24.664 fused_ordering(45) 00:14:24.664 fused_ordering(46) 00:14:24.664 fused_ordering(47) 00:14:24.664 fused_ordering(48) 00:14:24.664 fused_ordering(49) 00:14:24.664 fused_ordering(50) 00:14:24.664 fused_ordering(51) 00:14:24.664 fused_ordering(52) 00:14:24.664 fused_ordering(53) 00:14:24.664 fused_ordering(54) 00:14:24.664 fused_ordering(55) 00:14:24.664 fused_ordering(56) 00:14:24.664 fused_ordering(57) 00:14:24.664 fused_ordering(58) 00:14:24.664 fused_ordering(59) 00:14:24.664 fused_ordering(60) 00:14:24.664 fused_ordering(61) 00:14:24.664 fused_ordering(62) 00:14:24.664 fused_ordering(63) 00:14:24.664 fused_ordering(64) 00:14:24.664 fused_ordering(65) 00:14:24.664 fused_ordering(66) 00:14:24.664 fused_ordering(67) 00:14:24.664 fused_ordering(68) 00:14:24.664 fused_ordering(69) 00:14:24.664 fused_ordering(70) 00:14:24.664 fused_ordering(71) 00:14:24.664 fused_ordering(72) 00:14:24.664 fused_ordering(73) 00:14:24.664 fused_ordering(74) 00:14:24.664 fused_ordering(75) 00:14:24.664 fused_ordering(76) 00:14:24.664 fused_ordering(77) 00:14:24.664 fused_ordering(78) 00:14:24.664 fused_ordering(79) 00:14:24.664 fused_ordering(80) 00:14:24.664 fused_ordering(81) 00:14:24.664 fused_ordering(82) 00:14:24.664 fused_ordering(83) 00:14:24.664 fused_ordering(84) 00:14:24.664 fused_ordering(85) 00:14:24.664 fused_ordering(86) 00:14:24.664 fused_ordering(87) 00:14:24.664 fused_ordering(88) 00:14:24.664 fused_ordering(89) 00:14:24.664 fused_ordering(90) 00:14:24.664 fused_ordering(91) 00:14:24.664 fused_ordering(92) 00:14:24.664 fused_ordering(93) 00:14:24.664 fused_ordering(94) 00:14:24.664 fused_ordering(95) 00:14:24.664 fused_ordering(96) 00:14:24.664 fused_ordering(97) 00:14:24.664 fused_ordering(98) 
00:14:24.664 fused_ordering(99) 00:14:24.664 fused_ordering(100) 00:14:24.664 fused_ordering(101) 00:14:24.664 fused_ordering(102) 00:14:24.664 fused_ordering(103) 00:14:24.664 fused_ordering(104) 00:14:24.664 fused_ordering(105) 00:14:24.664 fused_ordering(106) 00:14:24.664 fused_ordering(107) 00:14:24.664 fused_ordering(108) 00:14:24.664 fused_ordering(109) 00:14:24.664 fused_ordering(110) 00:14:24.664 fused_ordering(111) 00:14:24.664 fused_ordering(112) 00:14:24.664 fused_ordering(113) 00:14:24.664 fused_ordering(114) 00:14:24.664 fused_ordering(115) 00:14:24.664 fused_ordering(116) 00:14:24.664 fused_ordering(117) 00:14:24.664 fused_ordering(118) 00:14:24.664 fused_ordering(119) 00:14:24.664 fused_ordering(120) 00:14:24.664 fused_ordering(121) 00:14:24.664 fused_ordering(122) 00:14:24.664 fused_ordering(123) 00:14:24.664 fused_ordering(124) 00:14:24.664 fused_ordering(125) 00:14:24.664 fused_ordering(126) 00:14:24.664 fused_ordering(127) 00:14:24.664 fused_ordering(128) 00:14:24.664 fused_ordering(129) 00:14:24.664 fused_ordering(130) 00:14:24.664 fused_ordering(131) 00:14:24.664 fused_ordering(132) 00:14:24.664 fused_ordering(133) 00:14:24.664 fused_ordering(134) 00:14:24.664 fused_ordering(135) 00:14:24.664 fused_ordering(136) 00:14:24.664 fused_ordering(137) 00:14:24.664 fused_ordering(138) 00:14:24.664 fused_ordering(139) 00:14:24.664 fused_ordering(140) 00:14:24.664 fused_ordering(141) 00:14:24.664 fused_ordering(142) 00:14:24.664 fused_ordering(143) 00:14:24.664 fused_ordering(144) 00:14:24.664 fused_ordering(145) 00:14:24.664 fused_ordering(146) 00:14:24.664 fused_ordering(147) 00:14:24.664 fused_ordering(148) 00:14:24.664 fused_ordering(149) 00:14:24.664 fused_ordering(150) 00:14:24.664 fused_ordering(151) 00:14:24.664 fused_ordering(152) 00:14:24.664 fused_ordering(153) 00:14:24.664 fused_ordering(154) 00:14:24.664 fused_ordering(155) 00:14:24.664 fused_ordering(156) 00:14:24.664 fused_ordering(157) 00:14:24.664 fused_ordering(158) 00:14:24.664 fused_ordering(159) 00:14:24.664 fused_ordering(160) 00:14:24.664 fused_ordering(161) 00:14:24.664 fused_ordering(162) 00:14:24.664 fused_ordering(163) 00:14:24.664 fused_ordering(164) 00:14:24.664 fused_ordering(165) 00:14:24.664 fused_ordering(166) 00:14:24.664 fused_ordering(167) 00:14:24.664 fused_ordering(168) 00:14:24.664 fused_ordering(169) 00:14:24.664 fused_ordering(170) 00:14:24.664 fused_ordering(171) 00:14:24.664 fused_ordering(172) 00:14:24.664 fused_ordering(173) 00:14:24.664 fused_ordering(174) 00:14:24.664 fused_ordering(175) 00:14:24.664 fused_ordering(176) 00:14:24.664 fused_ordering(177) 00:14:24.664 fused_ordering(178) 00:14:24.664 fused_ordering(179) 00:14:24.664 fused_ordering(180) 00:14:24.664 fused_ordering(181) 00:14:24.664 fused_ordering(182) 00:14:24.664 fused_ordering(183) 00:14:24.664 fused_ordering(184) 00:14:24.664 fused_ordering(185) 00:14:24.664 fused_ordering(186) 00:14:24.664 fused_ordering(187) 00:14:24.664 fused_ordering(188) 00:14:24.664 fused_ordering(189) 00:14:24.664 fused_ordering(190) 00:14:24.664 fused_ordering(191) 00:14:24.664 fused_ordering(192) 00:14:24.664 fused_ordering(193) 00:14:24.664 fused_ordering(194) 00:14:24.664 fused_ordering(195) 00:14:24.664 fused_ordering(196) 00:14:24.664 fused_ordering(197) 00:14:24.664 fused_ordering(198) 00:14:24.664 fused_ordering(199) 00:14:24.664 fused_ordering(200) 00:14:24.664 fused_ordering(201) 00:14:24.664 fused_ordering(202) 00:14:24.664 fused_ordering(203) 00:14:24.664 fused_ordering(204) 00:14:24.664 fused_ordering(205) 00:14:24.926 
fused_ordering(206) 00:14:24.926 fused_ordering(207) 00:14:24.926 fused_ordering(208) 00:14:24.926 fused_ordering(209) 00:14:24.926 fused_ordering(210) 00:14:24.926 fused_ordering(211) 00:14:24.926 fused_ordering(212) 00:14:24.926 fused_ordering(213) 00:14:24.926 fused_ordering(214) 00:14:24.926 fused_ordering(215) 00:14:24.926 fused_ordering(216) 00:14:24.926 fused_ordering(217) 00:14:24.926 fused_ordering(218) 00:14:24.926 fused_ordering(219) 00:14:24.926 fused_ordering(220) 00:14:24.926 fused_ordering(221) 00:14:24.926 fused_ordering(222) 00:14:24.926 fused_ordering(223) 00:14:24.926 fused_ordering(224) 00:14:24.926 fused_ordering(225) 00:14:24.926 fused_ordering(226) 00:14:24.926 fused_ordering(227) 00:14:24.926 fused_ordering(228) 00:14:24.926 fused_ordering(229) 00:14:24.926 fused_ordering(230) 00:14:24.926 fused_ordering(231) 00:14:24.926 fused_ordering(232) 00:14:24.926 fused_ordering(233) 00:14:24.926 fused_ordering(234) 00:14:24.926 fused_ordering(235) 00:14:24.926 fused_ordering(236) 00:14:24.926 fused_ordering(237) 00:14:24.926 fused_ordering(238) 00:14:24.926 fused_ordering(239) 00:14:24.926 fused_ordering(240) 00:14:24.926 fused_ordering(241) 00:14:24.926 fused_ordering(242) 00:14:24.926 fused_ordering(243) 00:14:24.926 fused_ordering(244) 00:14:24.926 fused_ordering(245) 00:14:24.926 fused_ordering(246) 00:14:24.926 fused_ordering(247) 00:14:24.926 fused_ordering(248) 00:14:24.926 fused_ordering(249) 00:14:24.926 fused_ordering(250) 00:14:24.926 fused_ordering(251) 00:14:24.926 fused_ordering(252) 00:14:24.926 fused_ordering(253) 00:14:24.926 fused_ordering(254) 00:14:24.926 fused_ordering(255) 00:14:24.926 fused_ordering(256) 00:14:24.926 fused_ordering(257) 00:14:24.926 fused_ordering(258) 00:14:24.926 fused_ordering(259) 00:14:24.926 fused_ordering(260) 00:14:24.926 fused_ordering(261) 00:14:24.926 fused_ordering(262) 00:14:24.926 fused_ordering(263) 00:14:24.926 fused_ordering(264) 00:14:24.926 fused_ordering(265) 00:14:24.926 fused_ordering(266) 00:14:24.926 fused_ordering(267) 00:14:24.926 fused_ordering(268) 00:14:24.926 fused_ordering(269) 00:14:24.926 fused_ordering(270) 00:14:24.926 fused_ordering(271) 00:14:24.926 fused_ordering(272) 00:14:24.926 fused_ordering(273) 00:14:24.926 fused_ordering(274) 00:14:24.926 fused_ordering(275) 00:14:24.926 fused_ordering(276) 00:14:24.926 fused_ordering(277) 00:14:24.926 fused_ordering(278) 00:14:24.926 fused_ordering(279) 00:14:24.926 fused_ordering(280) 00:14:24.926 fused_ordering(281) 00:14:24.926 fused_ordering(282) 00:14:24.926 fused_ordering(283) 00:14:24.926 fused_ordering(284) 00:14:24.926 fused_ordering(285) 00:14:24.926 fused_ordering(286) 00:14:24.926 fused_ordering(287) 00:14:24.926 fused_ordering(288) 00:14:24.926 fused_ordering(289) 00:14:24.926 fused_ordering(290) 00:14:24.926 fused_ordering(291) 00:14:24.926 fused_ordering(292) 00:14:24.926 fused_ordering(293) 00:14:24.926 fused_ordering(294) 00:14:24.926 fused_ordering(295) 00:14:24.926 fused_ordering(296) 00:14:24.926 fused_ordering(297) 00:14:24.926 fused_ordering(298) 00:14:24.926 fused_ordering(299) 00:14:24.926 fused_ordering(300) 00:14:24.926 fused_ordering(301) 00:14:24.926 fused_ordering(302) 00:14:24.926 fused_ordering(303) 00:14:24.926 fused_ordering(304) 00:14:24.926 fused_ordering(305) 00:14:24.926 fused_ordering(306) 00:14:24.926 fused_ordering(307) 00:14:24.926 fused_ordering(308) 00:14:24.926 fused_ordering(309) 00:14:24.926 fused_ordering(310) 00:14:24.926 fused_ordering(311) 00:14:24.926 fused_ordering(312) 00:14:24.926 fused_ordering(313) 
00:14:24.926 fused_ordering(314) 00:14:24.926 fused_ordering(315) 00:14:24.926 fused_ordering(316) 00:14:24.926 fused_ordering(317) 00:14:24.926 fused_ordering(318) 00:14:24.926 fused_ordering(319) 00:14:24.926 fused_ordering(320) 00:14:24.926 fused_ordering(321) 00:14:24.926 fused_ordering(322) 00:14:24.926 fused_ordering(323) 00:14:24.926 fused_ordering(324) 00:14:24.926 fused_ordering(325) 00:14:24.926 fused_ordering(326) 00:14:24.926 fused_ordering(327) 00:14:24.926 fused_ordering(328) 00:14:24.926 fused_ordering(329) 00:14:24.926 fused_ordering(330) 00:14:24.926 fused_ordering(331) 00:14:24.926 fused_ordering(332) 00:14:24.926 fused_ordering(333) 00:14:24.926 fused_ordering(334) 00:14:24.926 fused_ordering(335) 00:14:24.926 fused_ordering(336) 00:14:24.926 fused_ordering(337) 00:14:24.926 fused_ordering(338) 00:14:24.926 fused_ordering(339) 00:14:24.926 fused_ordering(340) 00:14:24.926 fused_ordering(341) 00:14:24.926 fused_ordering(342) 00:14:24.926 fused_ordering(343) 00:14:24.926 fused_ordering(344) 00:14:24.926 fused_ordering(345) 00:14:24.926 fused_ordering(346) 00:14:24.926 fused_ordering(347) 00:14:24.926 fused_ordering(348) 00:14:24.926 fused_ordering(349) 00:14:24.926 fused_ordering(350) 00:14:24.926 fused_ordering(351) 00:14:24.926 fused_ordering(352) 00:14:24.926 fused_ordering(353) 00:14:24.926 fused_ordering(354) 00:14:24.926 fused_ordering(355) 00:14:24.926 fused_ordering(356) 00:14:24.926 fused_ordering(357) 00:14:24.926 fused_ordering(358) 00:14:24.926 fused_ordering(359) 00:14:24.926 fused_ordering(360) 00:14:24.926 fused_ordering(361) 00:14:24.926 fused_ordering(362) 00:14:24.926 fused_ordering(363) 00:14:24.926 fused_ordering(364) 00:14:24.926 fused_ordering(365) 00:14:24.926 fused_ordering(366) 00:14:24.926 fused_ordering(367) 00:14:24.926 fused_ordering(368) 00:14:24.926 fused_ordering(369) 00:14:24.926 fused_ordering(370) 00:14:24.926 fused_ordering(371) 00:14:24.926 fused_ordering(372) 00:14:24.926 fused_ordering(373) 00:14:24.926 fused_ordering(374) 00:14:24.926 fused_ordering(375) 00:14:24.926 fused_ordering(376) 00:14:24.926 fused_ordering(377) 00:14:24.926 fused_ordering(378) 00:14:24.926 fused_ordering(379) 00:14:24.926 fused_ordering(380) 00:14:24.926 fused_ordering(381) 00:14:24.926 fused_ordering(382) 00:14:24.926 fused_ordering(383) 00:14:24.926 fused_ordering(384) 00:14:24.926 fused_ordering(385) 00:14:24.926 fused_ordering(386) 00:14:24.926 fused_ordering(387) 00:14:24.926 fused_ordering(388) 00:14:24.926 fused_ordering(389) 00:14:24.926 fused_ordering(390) 00:14:24.926 fused_ordering(391) 00:14:24.926 fused_ordering(392) 00:14:24.926 fused_ordering(393) 00:14:24.926 fused_ordering(394) 00:14:24.926 fused_ordering(395) 00:14:24.926 fused_ordering(396) 00:14:24.926 fused_ordering(397) 00:14:24.926 fused_ordering(398) 00:14:24.926 fused_ordering(399) 00:14:24.926 fused_ordering(400) 00:14:24.926 fused_ordering(401) 00:14:24.926 fused_ordering(402) 00:14:24.926 fused_ordering(403) 00:14:24.926 fused_ordering(404) 00:14:24.926 fused_ordering(405) 00:14:24.926 fused_ordering(406) 00:14:24.926 fused_ordering(407) 00:14:24.926 fused_ordering(408) 00:14:24.926 fused_ordering(409) 00:14:24.926 fused_ordering(410) 00:14:25.497 fused_ordering(411) 00:14:25.497 fused_ordering(412) 00:14:25.497 fused_ordering(413) 00:14:25.498 fused_ordering(414) 00:14:25.498 fused_ordering(415) 00:14:25.498 fused_ordering(416) 00:14:25.498 fused_ordering(417) 00:14:25.498 fused_ordering(418) 00:14:25.498 fused_ordering(419) 00:14:25.498 fused_ordering(420) 00:14:25.498 
fused_ordering(421) 00:14:25.498 fused_ordering(422) 00:14:25.498 fused_ordering(423) 00:14:25.498 fused_ordering(424) 00:14:25.498 fused_ordering(425) 00:14:25.498 fused_ordering(426) 00:14:25.498 fused_ordering(427) 00:14:25.498 fused_ordering(428) 00:14:25.498 fused_ordering(429) 00:14:25.498 fused_ordering(430) 00:14:25.498 fused_ordering(431) 00:14:25.498 fused_ordering(432) 00:14:25.498 fused_ordering(433) 00:14:25.498 fused_ordering(434) 00:14:25.498 fused_ordering(435) 00:14:25.498 fused_ordering(436) 00:14:25.498 fused_ordering(437) 00:14:25.498 fused_ordering(438) 00:14:25.498 fused_ordering(439) 00:14:25.498 fused_ordering(440) 00:14:25.498 fused_ordering(441) 00:14:25.498 fused_ordering(442) 00:14:25.498 fused_ordering(443) 00:14:25.498 fused_ordering(444) 00:14:25.498 fused_ordering(445) 00:14:25.498 fused_ordering(446) 00:14:25.498 fused_ordering(447) 00:14:25.498 fused_ordering(448) 00:14:25.498 fused_ordering(449) 00:14:25.498 fused_ordering(450) 00:14:25.498 fused_ordering(451) 00:14:25.498 fused_ordering(452) 00:14:25.498 fused_ordering(453) 00:14:25.498 fused_ordering(454) 00:14:25.498 fused_ordering(455) 00:14:25.498 fused_ordering(456) 00:14:25.498 fused_ordering(457) 00:14:25.498 fused_ordering(458) 00:14:25.498 fused_ordering(459) 00:14:25.498 fused_ordering(460) 00:14:25.498 fused_ordering(461) 00:14:25.498 fused_ordering(462) 00:14:25.498 fused_ordering(463) 00:14:25.498 fused_ordering(464) 00:14:25.498 fused_ordering(465) 00:14:25.498 fused_ordering(466) 00:14:25.498 fused_ordering(467) 00:14:25.498 fused_ordering(468) 00:14:25.498 fused_ordering(469) 00:14:25.498 fused_ordering(470) 00:14:25.498 fused_ordering(471) 00:14:25.498 fused_ordering(472) 00:14:25.498 fused_ordering(473) 00:14:25.498 fused_ordering(474) 00:14:25.498 fused_ordering(475) 00:14:25.498 fused_ordering(476) 00:14:25.498 fused_ordering(477) 00:14:25.498 fused_ordering(478) 00:14:25.498 fused_ordering(479) 00:14:25.498 fused_ordering(480) 00:14:25.498 fused_ordering(481) 00:14:25.498 fused_ordering(482) 00:14:25.498 fused_ordering(483) 00:14:25.498 fused_ordering(484) 00:14:25.498 fused_ordering(485) 00:14:25.498 fused_ordering(486) 00:14:25.498 fused_ordering(487) 00:14:25.498 fused_ordering(488) 00:14:25.498 fused_ordering(489) 00:14:25.498 fused_ordering(490) 00:14:25.498 fused_ordering(491) 00:14:25.498 fused_ordering(492) 00:14:25.498 fused_ordering(493) 00:14:25.498 fused_ordering(494) 00:14:25.498 fused_ordering(495) 00:14:25.498 fused_ordering(496) 00:14:25.498 fused_ordering(497) 00:14:25.498 fused_ordering(498) 00:14:25.498 fused_ordering(499) 00:14:25.498 fused_ordering(500) 00:14:25.498 fused_ordering(501) 00:14:25.498 fused_ordering(502) 00:14:25.498 fused_ordering(503) 00:14:25.498 fused_ordering(504) 00:14:25.498 fused_ordering(505) 00:14:25.498 fused_ordering(506) 00:14:25.498 fused_ordering(507) 00:14:25.498 fused_ordering(508) 00:14:25.498 fused_ordering(509) 00:14:25.498 fused_ordering(510) 00:14:25.498 fused_ordering(511) 00:14:25.498 fused_ordering(512) 00:14:25.498 fused_ordering(513) 00:14:25.498 fused_ordering(514) 00:14:25.498 fused_ordering(515) 00:14:25.498 fused_ordering(516) 00:14:25.498 fused_ordering(517) 00:14:25.498 fused_ordering(518) 00:14:25.498 fused_ordering(519) 00:14:25.498 fused_ordering(520) 00:14:25.498 fused_ordering(521) 00:14:25.498 fused_ordering(522) 00:14:25.498 fused_ordering(523) 00:14:25.498 fused_ordering(524) 00:14:25.498 fused_ordering(525) 00:14:25.498 fused_ordering(526) 00:14:25.498 fused_ordering(527) 00:14:25.498 fused_ordering(528) 
00:14:25.498 fused_ordering(529) 00:14:25.498 fused_ordering(530) 00:14:25.498 fused_ordering(531) 00:14:25.498 fused_ordering(532) 00:14:25.498 fused_ordering(533) 00:14:25.498 fused_ordering(534) 00:14:25.498 fused_ordering(535) 00:14:25.498 fused_ordering(536) 00:14:25.498 fused_ordering(537) 00:14:25.498 fused_ordering(538) 00:14:25.498 fused_ordering(539) 00:14:25.498 fused_ordering(540) 00:14:25.498 fused_ordering(541) 00:14:25.498 fused_ordering(542) 00:14:25.498 fused_ordering(543) 00:14:25.498 fused_ordering(544) 00:14:25.498 fused_ordering(545) 00:14:25.498 fused_ordering(546) 00:14:25.498 fused_ordering(547) 00:14:25.498 fused_ordering(548) 00:14:25.498 fused_ordering(549) 00:14:25.498 fused_ordering(550) 00:14:25.498 fused_ordering(551) 00:14:25.498 fused_ordering(552) 00:14:25.498 fused_ordering(553) 00:14:25.498 fused_ordering(554) 00:14:25.498 fused_ordering(555) 00:14:25.498 fused_ordering(556) 00:14:25.498 fused_ordering(557) 00:14:25.498 fused_ordering(558) 00:14:25.498 fused_ordering(559) 00:14:25.498 fused_ordering(560) 00:14:25.498 fused_ordering(561) 00:14:25.498 fused_ordering(562) 00:14:25.498 fused_ordering(563) 00:14:25.498 fused_ordering(564) 00:14:25.498 fused_ordering(565) 00:14:25.498 fused_ordering(566) 00:14:25.498 fused_ordering(567) 00:14:25.498 fused_ordering(568) 00:14:25.498 fused_ordering(569) 00:14:25.498 fused_ordering(570) 00:14:25.498 fused_ordering(571) 00:14:25.498 fused_ordering(572) 00:14:25.498 fused_ordering(573) 00:14:25.498 fused_ordering(574) 00:14:25.498 fused_ordering(575) 00:14:25.498 fused_ordering(576) 00:14:25.498 fused_ordering(577) 00:14:25.498 fused_ordering(578) 00:14:25.498 fused_ordering(579) 00:14:25.498 fused_ordering(580) 00:14:25.498 fused_ordering(581) 00:14:25.498 fused_ordering(582) 00:14:25.498 fused_ordering(583) 00:14:25.498 fused_ordering(584) 00:14:25.498 fused_ordering(585) 00:14:25.498 fused_ordering(586) 00:14:25.498 fused_ordering(587) 00:14:25.498 fused_ordering(588) 00:14:25.498 fused_ordering(589) 00:14:25.498 fused_ordering(590) 00:14:25.498 fused_ordering(591) 00:14:25.498 fused_ordering(592) 00:14:25.498 fused_ordering(593) 00:14:25.498 fused_ordering(594) 00:14:25.498 fused_ordering(595) 00:14:25.498 fused_ordering(596) 00:14:25.498 fused_ordering(597) 00:14:25.498 fused_ordering(598) 00:14:25.498 fused_ordering(599) 00:14:25.498 fused_ordering(600) 00:14:25.498 fused_ordering(601) 00:14:25.498 fused_ordering(602) 00:14:25.498 fused_ordering(603) 00:14:25.498 fused_ordering(604) 00:14:25.498 fused_ordering(605) 00:14:25.498 fused_ordering(606) 00:14:25.498 fused_ordering(607) 00:14:25.498 fused_ordering(608) 00:14:25.498 fused_ordering(609) 00:14:25.498 fused_ordering(610) 00:14:25.498 fused_ordering(611) 00:14:25.498 fused_ordering(612) 00:14:25.498 fused_ordering(613) 00:14:25.498 fused_ordering(614) 00:14:25.498 fused_ordering(615) 00:14:26.070 fused_ordering(616) 00:14:26.070 fused_ordering(617) 00:14:26.070 fused_ordering(618) 00:14:26.070 fused_ordering(619) 00:14:26.070 fused_ordering(620) 00:14:26.070 fused_ordering(621) 00:14:26.070 fused_ordering(622) 00:14:26.070 fused_ordering(623) 00:14:26.070 fused_ordering(624) 00:14:26.070 fused_ordering(625) 00:14:26.070 fused_ordering(626) 00:14:26.070 fused_ordering(627) 00:14:26.070 fused_ordering(628) 00:14:26.070 fused_ordering(629) 00:14:26.070 fused_ordering(630) 00:14:26.070 fused_ordering(631) 00:14:26.070 fused_ordering(632) 00:14:26.070 fused_ordering(633) 00:14:26.070 fused_ordering(634) 00:14:26.070 fused_ordering(635) 00:14:26.070 
fused_ordering(636) 00:14:26.070 fused_ordering(637) 00:14:26.070 fused_ordering(638) 00:14:26.070 fused_ordering(639) 00:14:26.070 fused_ordering(640) 00:14:26.070 fused_ordering(641) 00:14:26.070 fused_ordering(642) 00:14:26.070 fused_ordering(643) 00:14:26.070 fused_ordering(644) 00:14:26.070 fused_ordering(645) 00:14:26.070 fused_ordering(646) 00:14:26.070 fused_ordering(647) 00:14:26.070 fused_ordering(648) 00:14:26.070 fused_ordering(649) 00:14:26.070 fused_ordering(650) 00:14:26.070 fused_ordering(651) 00:14:26.070 fused_ordering(652) 00:14:26.070 fused_ordering(653) 00:14:26.070 fused_ordering(654) 00:14:26.070 fused_ordering(655) 00:14:26.070 fused_ordering(656) 00:14:26.070 fused_ordering(657) 00:14:26.070 fused_ordering(658) 00:14:26.070 fused_ordering(659) 00:14:26.070 fused_ordering(660) 00:14:26.070 fused_ordering(661) 00:14:26.070 fused_ordering(662) 00:14:26.070 fused_ordering(663) 00:14:26.070 fused_ordering(664) 00:14:26.070 fused_ordering(665) 00:14:26.070 fused_ordering(666) 00:14:26.070 fused_ordering(667) 00:14:26.070 fused_ordering(668) 00:14:26.070 fused_ordering(669) 00:14:26.070 fused_ordering(670) 00:14:26.070 fused_ordering(671) 00:14:26.070 fused_ordering(672) 00:14:26.070 fused_ordering(673) 00:14:26.070 fused_ordering(674) 00:14:26.070 fused_ordering(675) 00:14:26.070 fused_ordering(676) 00:14:26.070 fused_ordering(677) 00:14:26.070 fused_ordering(678) 00:14:26.070 fused_ordering(679) 00:14:26.070 fused_ordering(680) 00:14:26.070 fused_ordering(681) 00:14:26.070 fused_ordering(682) 00:14:26.070 fused_ordering(683) 00:14:26.070 fused_ordering(684) 00:14:26.070 fused_ordering(685) 00:14:26.070 fused_ordering(686) 00:14:26.070 fused_ordering(687) 00:14:26.070 fused_ordering(688) 00:14:26.070 fused_ordering(689) 00:14:26.070 fused_ordering(690) 00:14:26.070 fused_ordering(691) 00:14:26.070 fused_ordering(692) 00:14:26.070 fused_ordering(693) 00:14:26.070 fused_ordering(694) 00:14:26.070 fused_ordering(695) 00:14:26.070 fused_ordering(696) 00:14:26.070 fused_ordering(697) 00:14:26.070 fused_ordering(698) 00:14:26.070 fused_ordering(699) 00:14:26.070 fused_ordering(700) 00:14:26.070 fused_ordering(701) 00:14:26.070 fused_ordering(702) 00:14:26.070 fused_ordering(703) 00:14:26.070 fused_ordering(704) 00:14:26.070 fused_ordering(705) 00:14:26.070 fused_ordering(706) 00:14:26.070 fused_ordering(707) 00:14:26.070 fused_ordering(708) 00:14:26.070 fused_ordering(709) 00:14:26.070 fused_ordering(710) 00:14:26.070 fused_ordering(711) 00:14:26.070 fused_ordering(712) 00:14:26.070 fused_ordering(713) 00:14:26.070 fused_ordering(714) 00:14:26.070 fused_ordering(715) 00:14:26.070 fused_ordering(716) 00:14:26.070 fused_ordering(717) 00:14:26.070 fused_ordering(718) 00:14:26.070 fused_ordering(719) 00:14:26.070 fused_ordering(720) 00:14:26.070 fused_ordering(721) 00:14:26.070 fused_ordering(722) 00:14:26.070 fused_ordering(723) 00:14:26.070 fused_ordering(724) 00:14:26.070 fused_ordering(725) 00:14:26.070 fused_ordering(726) 00:14:26.070 fused_ordering(727) 00:14:26.070 fused_ordering(728) 00:14:26.070 fused_ordering(729) 00:14:26.070 fused_ordering(730) 00:14:26.070 fused_ordering(731) 00:14:26.070 fused_ordering(732) 00:14:26.070 fused_ordering(733) 00:14:26.070 fused_ordering(734) 00:14:26.070 fused_ordering(735) 00:14:26.070 fused_ordering(736) 00:14:26.070 fused_ordering(737) 00:14:26.070 fused_ordering(738) 00:14:26.070 fused_ordering(739) 00:14:26.070 fused_ordering(740) 00:14:26.070 fused_ordering(741) 00:14:26.070 fused_ordering(742) 00:14:26.070 fused_ordering(743) 
00:14:26.070 fused_ordering(744) 00:14:26.070 fused_ordering(745) 00:14:26.070 fused_ordering(746) 00:14:26.070 fused_ordering(747) 00:14:26.070 fused_ordering(748) 00:14:26.070 fused_ordering(749) 00:14:26.070 fused_ordering(750) 00:14:26.070 fused_ordering(751) 00:14:26.070 fused_ordering(752) 00:14:26.070 fused_ordering(753) 00:14:26.070 fused_ordering(754) 00:14:26.070 fused_ordering(755) 00:14:26.070 fused_ordering(756) 00:14:26.070 fused_ordering(757) 00:14:26.070 fused_ordering(758) 00:14:26.070 fused_ordering(759) 00:14:26.070 fused_ordering(760) 00:14:26.070 fused_ordering(761) 00:14:26.070 fused_ordering(762) 00:14:26.070 fused_ordering(763) 00:14:26.070 fused_ordering(764) 00:14:26.070 fused_ordering(765) 00:14:26.070 fused_ordering(766) 00:14:26.070 fused_ordering(767) 00:14:26.070 fused_ordering(768) 00:14:26.070 fused_ordering(769) 00:14:26.070 fused_ordering(770) 00:14:26.070 fused_ordering(771) 00:14:26.070 fused_ordering(772) 00:14:26.070 fused_ordering(773) 00:14:26.070 fused_ordering(774) 00:14:26.070 fused_ordering(775) 00:14:26.070 fused_ordering(776) 00:14:26.070 fused_ordering(777) 00:14:26.070 fused_ordering(778) 00:14:26.070 fused_ordering(779) 00:14:26.070 fused_ordering(780) 00:14:26.070 fused_ordering(781) 00:14:26.070 fused_ordering(782) 00:14:26.070 fused_ordering(783) 00:14:26.070 fused_ordering(784) 00:14:26.070 fused_ordering(785) 00:14:26.070 fused_ordering(786) 00:14:26.070 fused_ordering(787) 00:14:26.070 fused_ordering(788) 00:14:26.070 fused_ordering(789) 00:14:26.070 fused_ordering(790) 00:14:26.070 fused_ordering(791) 00:14:26.070 fused_ordering(792) 00:14:26.070 fused_ordering(793) 00:14:26.070 fused_ordering(794) 00:14:26.070 fused_ordering(795) 00:14:26.070 fused_ordering(796) 00:14:26.070 fused_ordering(797) 00:14:26.070 fused_ordering(798) 00:14:26.070 fused_ordering(799) 00:14:26.070 fused_ordering(800) 00:14:26.070 fused_ordering(801) 00:14:26.070 fused_ordering(802) 00:14:26.070 fused_ordering(803) 00:14:26.071 fused_ordering(804) 00:14:26.071 fused_ordering(805) 00:14:26.071 fused_ordering(806) 00:14:26.071 fused_ordering(807) 00:14:26.071 fused_ordering(808) 00:14:26.071 fused_ordering(809) 00:14:26.071 fused_ordering(810) 00:14:26.071 fused_ordering(811) 00:14:26.071 fused_ordering(812) 00:14:26.071 fused_ordering(813) 00:14:26.071 fused_ordering(814) 00:14:26.071 fused_ordering(815) 00:14:26.071 fused_ordering(816) 00:14:26.071 fused_ordering(817) 00:14:26.071 fused_ordering(818) 00:14:26.071 fused_ordering(819) 00:14:26.071 fused_ordering(820) 00:14:26.642 fused_ordering(821) 00:14:26.642 fused_ordering(822) 00:14:26.642 fused_ordering(823) 00:14:26.642 fused_ordering(824) 00:14:26.642 fused_ordering(825) 00:14:26.642 fused_ordering(826) 00:14:26.642 fused_ordering(827) 00:14:26.642 fused_ordering(828) 00:14:26.642 fused_ordering(829) 00:14:26.642 fused_ordering(830) 00:14:26.642 fused_ordering(831) 00:14:26.642 fused_ordering(832) 00:14:26.642 fused_ordering(833) 00:14:26.642 fused_ordering(834) 00:14:26.642 fused_ordering(835) 00:14:26.642 fused_ordering(836) 00:14:26.642 fused_ordering(837) 00:14:26.642 fused_ordering(838) 00:14:26.642 fused_ordering(839) 00:14:26.642 fused_ordering(840) 00:14:26.642 fused_ordering(841) 00:14:26.642 fused_ordering(842) 00:14:26.642 fused_ordering(843) 00:14:26.642 fused_ordering(844) 00:14:26.642 fused_ordering(845) 00:14:26.642 fused_ordering(846) 00:14:26.642 fused_ordering(847) 00:14:26.642 fused_ordering(848) 00:14:26.642 fused_ordering(849) 00:14:26.642 fused_ordering(850) 00:14:26.642 
fused_ordering(851) 00:14:26.642 fused_ordering(852) 00:14:26.642 fused_ordering(853) 00:14:26.642 fused_ordering(854) 00:14:26.642 fused_ordering(855) 00:14:26.642 fused_ordering(856) 00:14:26.642 fused_ordering(857) 00:14:26.642 fused_ordering(858) 00:14:26.642 fused_ordering(859) 00:14:26.642 fused_ordering(860) 00:14:26.642 fused_ordering(861) 00:14:26.642 fused_ordering(862) 00:14:26.642 fused_ordering(863) 00:14:26.642 fused_ordering(864) 00:14:26.642 fused_ordering(865) 00:14:26.642 fused_ordering(866) 00:14:26.642 fused_ordering(867) 00:14:26.642 fused_ordering(868) 00:14:26.642 fused_ordering(869) 00:14:26.642 fused_ordering(870) 00:14:26.642 fused_ordering(871) 00:14:26.642 fused_ordering(872) 00:14:26.642 fused_ordering(873) 00:14:26.642 fused_ordering(874) 00:14:26.642 fused_ordering(875) 00:14:26.642 fused_ordering(876) 00:14:26.642 fused_ordering(877) 00:14:26.642 fused_ordering(878) 00:14:26.642 fused_ordering(879) 00:14:26.642 fused_ordering(880) 00:14:26.642 fused_ordering(881) 00:14:26.642 fused_ordering(882) 00:14:26.642 fused_ordering(883) 00:14:26.642 fused_ordering(884) 00:14:26.642 fused_ordering(885) 00:14:26.642 fused_ordering(886) 00:14:26.642 fused_ordering(887) 00:14:26.642 fused_ordering(888) 00:14:26.642 fused_ordering(889) 00:14:26.642 fused_ordering(890) 00:14:26.642 fused_ordering(891) 00:14:26.642 fused_ordering(892) 00:14:26.642 fused_ordering(893) 00:14:26.642 fused_ordering(894) 00:14:26.642 fused_ordering(895) 00:14:26.642 fused_ordering(896) 00:14:26.642 fused_ordering(897) 00:14:26.642 fused_ordering(898) 00:14:26.642 fused_ordering(899) 00:14:26.642 fused_ordering(900) 00:14:26.642 fused_ordering(901) 00:14:26.642 fused_ordering(902) 00:14:26.642 fused_ordering(903) 00:14:26.642 fused_ordering(904) 00:14:26.642 fused_ordering(905) 00:14:26.642 fused_ordering(906) 00:14:26.642 fused_ordering(907) 00:14:26.642 fused_ordering(908) 00:14:26.642 fused_ordering(909) 00:14:26.642 fused_ordering(910) 00:14:26.642 fused_ordering(911) 00:14:26.642 fused_ordering(912) 00:14:26.642 fused_ordering(913) 00:14:26.642 fused_ordering(914) 00:14:26.642 fused_ordering(915) 00:14:26.642 fused_ordering(916) 00:14:26.642 fused_ordering(917) 00:14:26.642 fused_ordering(918) 00:14:26.642 fused_ordering(919) 00:14:26.642 fused_ordering(920) 00:14:26.642 fused_ordering(921) 00:14:26.642 fused_ordering(922) 00:14:26.642 fused_ordering(923) 00:14:26.642 fused_ordering(924) 00:14:26.642 fused_ordering(925) 00:14:26.642 fused_ordering(926) 00:14:26.642 fused_ordering(927) 00:14:26.642 fused_ordering(928) 00:14:26.642 fused_ordering(929) 00:14:26.642 fused_ordering(930) 00:14:26.642 fused_ordering(931) 00:14:26.642 fused_ordering(932) 00:14:26.642 fused_ordering(933) 00:14:26.642 fused_ordering(934) 00:14:26.642 fused_ordering(935) 00:14:26.642 fused_ordering(936) 00:14:26.642 fused_ordering(937) 00:14:26.642 fused_ordering(938) 00:14:26.642 fused_ordering(939) 00:14:26.642 fused_ordering(940) 00:14:26.642 fused_ordering(941) 00:14:26.642 fused_ordering(942) 00:14:26.642 fused_ordering(943) 00:14:26.642 fused_ordering(944) 00:14:26.642 fused_ordering(945) 00:14:26.642 fused_ordering(946) 00:14:26.642 fused_ordering(947) 00:14:26.642 fused_ordering(948) 00:14:26.642 fused_ordering(949) 00:14:26.642 fused_ordering(950) 00:14:26.642 fused_ordering(951) 00:14:26.642 fused_ordering(952) 00:14:26.642 fused_ordering(953) 00:14:26.642 fused_ordering(954) 00:14:26.642 fused_ordering(955) 00:14:26.642 fused_ordering(956) 00:14:26.642 fused_ordering(957) 00:14:26.642 fused_ordering(958) 
00:14:26.642 fused_ordering(959) 00:14:26.642 fused_ordering(960) 00:14:26.642 fused_ordering(961) 00:14:26.642 fused_ordering(962) 00:14:26.642 fused_ordering(963) 00:14:26.642 fused_ordering(964) 00:14:26.642 fused_ordering(965) 00:14:26.642 fused_ordering(966) 00:14:26.642 fused_ordering(967) 00:14:26.642 fused_ordering(968) 00:14:26.642 fused_ordering(969) 00:14:26.642 fused_ordering(970) 00:14:26.642 fused_ordering(971) 00:14:26.642 fused_ordering(972) 00:14:26.642 fused_ordering(973) 00:14:26.642 fused_ordering(974) 00:14:26.642 fused_ordering(975) 00:14:26.642 fused_ordering(976) 00:14:26.642 fused_ordering(977) 00:14:26.642 fused_ordering(978) 00:14:26.642 fused_ordering(979) 00:14:26.642 fused_ordering(980) 00:14:26.642 fused_ordering(981) 00:14:26.642 fused_ordering(982) 00:14:26.642 fused_ordering(983) 00:14:26.642 fused_ordering(984) 00:14:26.642 fused_ordering(985) 00:14:26.642 fused_ordering(986) 00:14:26.642 fused_ordering(987) 00:14:26.642 fused_ordering(988) 00:14:26.642 fused_ordering(989) 00:14:26.642 fused_ordering(990) 00:14:26.642 fused_ordering(991) 00:14:26.642 fused_ordering(992) 00:14:26.643 fused_ordering(993) 00:14:26.643 fused_ordering(994) 00:14:26.643 fused_ordering(995) 00:14:26.643 fused_ordering(996) 00:14:26.643 fused_ordering(997) 00:14:26.643 fused_ordering(998) 00:14:26.643 fused_ordering(999) 00:14:26.643 fused_ordering(1000) 00:14:26.643 fused_ordering(1001) 00:14:26.643 fused_ordering(1002) 00:14:26.643 fused_ordering(1003) 00:14:26.643 fused_ordering(1004) 00:14:26.643 fused_ordering(1005) 00:14:26.643 fused_ordering(1006) 00:14:26.643 fused_ordering(1007) 00:14:26.643 fused_ordering(1008) 00:14:26.643 fused_ordering(1009) 00:14:26.643 fused_ordering(1010) 00:14:26.643 fused_ordering(1011) 00:14:26.643 fused_ordering(1012) 00:14:26.643 fused_ordering(1013) 00:14:26.643 fused_ordering(1014) 00:14:26.643 fused_ordering(1015) 00:14:26.643 fused_ordering(1016) 00:14:26.643 fused_ordering(1017) 00:14:26.643 fused_ordering(1018) 00:14:26.643 fused_ordering(1019) 00:14:26.643 fused_ordering(1020) 00:14:26.643 fused_ordering(1021) 00:14:26.643 fused_ordering(1022) 00:14:26.643 fused_ordering(1023) 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:26.643 rmmod nvme_tcp 00:14:26.643 rmmod nvme_fabrics 00:14:26.643 rmmod nvme_keyring 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:26.643 16:57:18 
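The nvmfcleanup sequence just above retries the kernel module unload because the initiator-side modules can briefly hold references after disconnect; the rmmod lines interleaved in the trace are modprobe -v reporting the actual removals. The idiom as a sketch (the retry delay and break condition are illustrative, not copied from common.sh):

set +e                                # unload is allowed to fail while references drain
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break  # -v prints the rmmod calls; dependents nvme_fabrics/nvme_keyring go with it
    sleep 1
done
modprobe -v -r nvme-fabrics
set -e

The rest of the teardown, visible below, reverses the setup: kill the target by pid, restore the firewall by replaying iptables-save output with the SPDK_NVMF-tagged rules filtered out, delete the network namespace, and flush the leftover address from cvl_0_1.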
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1898827 ']' 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1898827 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1898827 ']' 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1898827 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1898827 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1898827' 00:14:26.643 killing process with pid 1898827 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1898827 00:14:26.643 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1898827 00:14:26.903 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:26.903 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:26.903 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:26.903 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:26.903 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:26.903 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:26.903 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:26.903 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:26.903 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:26.903 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.903 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:26.903 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.991 16:57:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:28.991 00:14:28.991 real 0m13.581s 00:14:28.991 user 0m7.183s 00:14:28.991 sys 0m7.265s 00:14:28.991 16:57:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.991 16:57:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:28.991 ************************************ 00:14:28.991 END TEST nvmf_fused_ordering 00:14:28.991 
************************************ 00:14:28.991 16:57:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:28.991 16:57:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:28.991 16:57:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.991 16:57:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:28.991 ************************************ 00:14:28.991 START TEST nvmf_ns_masking 00:14:28.991 ************************************ 00:14:28.991 16:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:28.991 * Looking for test storage... 00:14:28.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.991 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:28.991 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:28.991 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:29.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.253 --rc genhtml_branch_coverage=1 00:14:29.253 --rc genhtml_function_coverage=1 00:14:29.253 --rc genhtml_legend=1 00:14:29.253 --rc geninfo_all_blocks=1 00:14:29.253 --rc geninfo_unexecuted_blocks=1 00:14:29.253 00:14:29.253 ' 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:29.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.253 --rc genhtml_branch_coverage=1 00:14:29.253 --rc genhtml_function_coverage=1 00:14:29.253 --rc genhtml_legend=1 00:14:29.253 --rc geninfo_all_blocks=1 00:14:29.253 --rc geninfo_unexecuted_blocks=1 00:14:29.253 00:14:29.253 ' 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:29.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.253 --rc genhtml_branch_coverage=1 00:14:29.253 --rc genhtml_function_coverage=1 00:14:29.253 --rc genhtml_legend=1 00:14:29.253 --rc geninfo_all_blocks=1 00:14:29.253 --rc geninfo_unexecuted_blocks=1 00:14:29.253 00:14:29.253 ' 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:29.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.253 --rc genhtml_branch_coverage=1 00:14:29.253 --rc genhtml_function_coverage=1 00:14:29.253 --rc genhtml_legend=1 00:14:29.253 --rc geninfo_all_blocks=1 00:14:29.253 --rc geninfo_unexecuted_blocks=1 00:14:29.253 00:14:29.253 ' 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
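
[note] The version dance traced above is the stock "is this lcov older than 2.x" gate: split both version strings into fields, compare field by field, and only enable the extra --rc coverage options on an old lcov. A minimal sketch of that comparison, assuming plain dotted numeric versions (the real cmp_versions in scripts/common.sh also splits on '-' and ':'):

  lt() {                          # lt 1.15 2  -> success when $1 < $2
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                      # equal is not less-than
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && \
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
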
nvmf/common.sh@7 -- # uname -s 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.253 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:29.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e6641964-6aa0-41fd-94a5-92b9dd740ba6 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=438c721a-4211-4c05-8745-beb724faf49c 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a7c1a9a1-ee73-478b-950f-babe6cb87a2c 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:29.254 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:37.390 16:57:28 
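
[note] Everything the masking test does from here hangs off a handful of generated identifiers, set in the trace above. Condensed (the UUIDs below are regenerated per run, so these assignments are the shape of the setup, not this run's values):

  SUBSYSNQN=nqn.2016-06.io.spdk:cnode1        # the one subsystem under test
  HOSTNQN1=nqn.2016-06.io.spdk:host1          # host whose visibility gets toggled
  HOSTNQN2=nqn.2016-06.io.spdk:host2
  ns1uuid=$(uuidgen)                          # identities for namespaces 1 and 2
  ns2uuid=$(uuidgen)
  HOSTID=$(uuidgen)                           # passed to 'nvme connect -I' below
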
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:37.390 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:37.390 16:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.390 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:37.391 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:37.391 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
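
[note] gather_supported_nvmf_pci_devs above matches known NIC device IDs (0x159b is the E810 family reported in the two "Found 0000:4b:00.x" lines) and then asks sysfs which net interface sits on each PCI function. A rough standalone equivalent, assuming the standard sysfs layout and ports already bound to a netdev driver:

  for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do               # kernel exposes the ifname here
      [[ -e $net ]] || continue
      echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
  done
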
00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:37.391 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:37.391 16:57:28 
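
[note] The topology nvmf_tcp_init builds above, condensed: one physical E810 port is moved into a private network namespace and becomes the target side, the sibling port stays in the default namespace as the initiator, and traffic goes over the wire between them:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                               # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                     # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0       # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT            # open the NVMe/TCP port
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # sanity-check both directions
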
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:37.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:14:37.391 00:14:37.391 --- 10.0.0.2 ping statistics --- 00:14:37.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.391 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:37.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:37.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:14:37.391 00:14:37.391 --- 10.0.0.1 ping statistics --- 00:14:37.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.391 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:37.391 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1903724 00:14:37.392 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1903724 00:14:37.392 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:37.392 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1903724 ']' 00:14:37.392 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.392 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:37.392 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.392 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:37.392 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:37.392 [2024-11-20 16:57:28.859306] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:14:37.392 [2024-11-20 16:57:28.859377] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.392 [2024-11-20 16:57:28.962228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.392 [2024-11-20 16:57:29.012560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.392 [2024-11-20 16:57:29.012607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.392 [2024-11-20 16:57:29.012615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.392 [2024-11-20 16:57:29.012628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.392 [2024-11-20 16:57:29.012634] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
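
[note] nvmfappstart above reduces to: launch nvmf_tgt inside the target namespace, remember its pid, and block until the app answers on its RPC socket before issuing any configuration. A compressed sketch (waitforlisten polls /var/tmp/spdk.sock; rpc_get_methods is used here as one cheap liveness probe, not necessarily the exact call waitforlisten makes):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                     # target not listening yet
  done
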
00:14:37.392 [2024-11-20 16:57:29.013379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.652 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:37.652 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:37.652 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:37.652 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:37.652 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:37.652 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.652 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:37.913 [2024-11-20 16:57:29.884722] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.913 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:37.913 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:37.913 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:38.173 Malloc1 00:14:38.173 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:38.173 Malloc2 00:14:38.173 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:38.433 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:38.693 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.693 [2024-11-20 16:57:30.834922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.693 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:38.693 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a7c1a9a1-ee73-478b-950f-babe6cb87a2c -a 10.0.0.2 -s 4420 -i 4 00:14:38.954 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:38.954 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:38.954 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:38.954 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:38.954 
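
[note] The whole target-side provisioning plus the first host attach happens in the stretch above. Flattened into the bare RPC/CLI sequence ($rpc is shorthand; the trace uses the full scripts/rpc.py path):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1                  # 64 MiB backing, 512 B blocks
  $rpc bdev_malloc_create 64 512 -b Malloc2
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
                                                             # -a: allow any host NQN, -s: serial
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I "$HOSTID" -i 4            # -I pins the host identifier, -i the I/O queue count
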
16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:40.866 16:57:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:40.866 16:57:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:40.866 16:57:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:40.866 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:40.866 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:40.866 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:40.866 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:40.866 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:41.126 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:41.126 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:41.126 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:41.126 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:41.126 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:41.126 [ 0]:0x1 00:14:41.126 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:41.126 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:41.126 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1cf07305edf34938a877adc63110e2e6 00:14:41.126 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1cf07305edf34938a877adc63110e2e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:41.126 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:41.126 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:41.126 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:41.126 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:41.126 [ 0]:0x1 00:14:41.126 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:41.126 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:41.386 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1cf07305edf34938a877adc63110e2e6 00:14:41.386 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1cf07305edf34938a877adc63110e2e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:41.386 16:57:33 
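
[note] ns_is_visible above is a two-step probe against the live controller: does 'nvme list-ns' list the NSID at all, and does 'nvme id-ns' hand back a real NGUID rather than all zeroes. A minimal sketch of that helper, close to what the trace executes:

  ns_is_visible() {               # ns_is_visible 0x1
    nvme list-ns /dev/nvme0 | grep "$1"                       # prints "[ n]:0xN" when listed
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]        # 32 zeroes == masked/hidden
  }
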
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:41.386 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:41.386 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:41.386 [ 1]:0x2 00:14:41.386 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:41.386 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:41.386 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4574af0a6af54b3fa304de0b661ec217 00:14:41.386 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4574af0a6af54b3fa304de0b661ec217 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:41.386 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:41.386 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:41.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.386 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.646 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:41.907 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:41.907 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a7c1a9a1-ee73-478b-950f-babe6cb87a2c -a 10.0.0.2 -s 4420 -i 4 00:14:41.907 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:41.907 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:41.907 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:41.907 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:41.907 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:41.907 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:43.817 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:43.817 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:43.817 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:43.817 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:43.817 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:43.817 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
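
[note] The pivot of the test happens just above: namespace 1 is torn down and re-created with --no-auto-visible, so after the reconnect no host sees NSID 1 until it is explicitly attached. In RPC terms:

  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  # from here, visibility of NSID 1 is managed per host NQN:
  #   nvmf_ns_add_host    <subnqn> <nsid> <hostnqn>    -> host gains the namespace
  #   nvmf_ns_remove_host <subnqn> <nsid> <hostnqn>    -> host loses it again
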
return 0 00:14:44.078 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:44.078 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:44.078 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:44.079 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:44.079 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:44.079 [ 0]:0x2 00:14:44.079 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:44.079 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:44.338 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=4574af0a6af54b3fa304de0b661ec217 00:14:44.338 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4574af0a6af54b3fa304de0b661ec217 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.338 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:44.338 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:44.338 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:44.338 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:44.338 [ 0]:0x1 00:14:44.338 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:44.338 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:44.338 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1cf07305edf34938a877adc63110e2e6 00:14:44.338 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1cf07305edf34938a877adc63110e2e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.338 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:44.338 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:44.338 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:44.338 [ 1]:0x2 00:14:44.339 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:44.339 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:44.599 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4574af0a6af54b3fa304de0b661ec217 00:14:44.599 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4574af0a6af54b3fa304de0b661ec217 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.599 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:44.599 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:44.599 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:44.599 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:44.599 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:44.599 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.599 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:44.599 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.599 16:57:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:44.599 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:44.599 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:44.599 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:44.599 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:44.860 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:44.860 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.860 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:44.860 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:44.860 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:44.860 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:44.860 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:44.860 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:44.860 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:44.860 [ 0]:0x2 00:14:44.860 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:44.860 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:44.860 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4574af0a6af54b3fa304de0b661ec217 00:14:44.860 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4574af0a6af54b3fa304de0b661ec217 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.860 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:44.860 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:44.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.860 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:45.119 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:45.119 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a7c1a9a1-ee73-478b-950f-babe6cb87a2c -a 10.0.0.2 -s 4420 -i 4 00:14:45.379 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:45.379 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:45.379 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
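
[note] Worth noting in the sequence above: the initiator stays connected throughout, and each nvmf_ns_add_host / nvmf_ns_remove_host immediately changes what the live controller reports, so masking is dynamic rather than a connect-time decision. A condensed replay using the ns_is_visible sketch from earlier:

  $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  ns_is_visible 0x1                                           # listed, real NGUID
  $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  ns_is_visible 0x1 || echo "hidden again without reconnecting"
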
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:45.379 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:45.379 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:45.379 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:47.291 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:47.291 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:47.291 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:47.291 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:47.291 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:47.291 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:47.291 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:47.291 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:47.291 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:47.291 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:47.291 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:47.291 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:47.291 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:47.551 [ 0]:0x1 00:14:47.551 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:47.551 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:47.551 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1cf07305edf34938a877adc63110e2e6 00:14:47.551 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1cf07305edf34938a877adc63110e2e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:47.551 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:47.551 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:47.551 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:47.551 [ 1]:0x2 00:14:47.551 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:47.551 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:47.551 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4574af0a6af54b3fa304de0b661ec217 00:14:47.551 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4574af0a6af54b3fa304de0b661ec217 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:47.551 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:47.813 [ 0]:0x2 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4574af0a6af54b3fa304de0b661ec217 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4574af0a6af54b3fa304de0b661ec217 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:47.813 16:57:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:47.813 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:48.074 [2024-11-20 16:57:40.100268] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:48.074 request: 00:14:48.074 { 00:14:48.074 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.074 "nsid": 2, 00:14:48.074 "host": "nqn.2016-06.io.spdk:host1", 00:14:48.074 "method": "nvmf_ns_remove_host", 00:14:48.074 "req_id": 1 00:14:48.074 } 00:14:48.074 Got JSON-RPC error response 00:14:48.074 response: 00:14:48.074 { 00:14:48.074 "code": -32602, 00:14:48.074 "message": "Invalid parameters" 00:14:48.074 } 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:48.074 16:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:48.074 [ 0]:0x2 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4574af0a6af54b3fa304de0b661ec217 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4574af0a6af54b3fa304de0b661ec217 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:48.074 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:48.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.334 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1906116 00:14:48.334 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.334 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:48.334 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1906116 /var/tmp/host.sock 00:14:48.334 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1906116 ']' 00:14:48.334 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:48.334 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.334 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:48.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:48.334 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.334 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:48.334 [2024-11-20 16:57:40.346749] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:14:48.334 [2024-11-20 16:57:40.346820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906116 ] 00:14:48.334 [2024-11-20 16:57:40.436082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.334 [2024-11-20 16:57:40.472147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.272 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.273 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:49.273 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.273 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:49.532 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e6641964-6aa0-41fd-94a5-92b9dd740ba6 00:14:49.532 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:49.532 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E66419646AA041FD94A592B9DD740BA6 -i 00:14:49.532 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 438c721a-4211-4c05-8745-beb724faf49c 00:14:49.532 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:49.533 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 438C721A42114C058745BEB724FAF49C -i 00:14:49.792 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:50.051 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:50.311 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:50.311 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:50.573 nvme0n1 00:14:50.573 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:50.573 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:51.142 nvme1n2 00:14:51.142 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:51.142 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:51.142 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:51.142 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:51.142 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:51.142 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:51.142 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:51.142 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:51.142 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:51.402 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e6641964-6aa0-41fd-94a5-92b9dd740ba6 == \e\6\6\4\1\9\6\4\-\6\a\a\0\-\4\1\f\d\-\9\4\a\5\-\9\2\b\9\d\d\7\4\0\b\a\6 ]] 00:14:51.402 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:51.402 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:51.402 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:51.662 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
438c721a-4211-4c05-8745-beb724faf49c == \4\3\8\c\7\2\1\a\-\4\2\1\1\-\4\c\0\5\-\8\7\4\5\-\b\e\b\7\2\4\f\a\f\4\9\c ]] 00:14:51.662 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.662 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:51.922 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid e6641964-6aa0-41fd-94a5-92b9dd740ba6 00:14:51.922 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:51.922 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E66419646AA041FD94A592B9DD740BA6 00:14:51.922 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:51.922 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E66419646AA041FD94A592B9DD740BA6 00:14:51.922 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.922 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:51.922 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.922 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:51.922 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.922 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:51.922 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.922 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:51.922 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E66419646AA041FD94A592B9DD740BA6 00:14:52.182 [2024-11-20 16:57:44.143151] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:52.182 [2024-11-20 16:57:44.143183] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:52.182 [2024-11-20 16:57:44.143190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.182 request: 00:14:52.182 { 00:14:52.183 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.183 "namespace": { 00:14:52.183 "bdev_name": 
"invalid", 00:14:52.183 "nsid": 1, 00:14:52.183 "nguid": "E66419646AA041FD94A592B9DD740BA6", 00:14:52.183 "no_auto_visible": false, 00:14:52.183 "hide_metadata": false 00:14:52.183 }, 00:14:52.183 "method": "nvmf_subsystem_add_ns", 00:14:52.183 "req_id": 1 00:14:52.183 } 00:14:52.183 Got JSON-RPC error response 00:14:52.183 response: 00:14:52.183 { 00:14:52.183 "code": -32602, 00:14:52.183 "message": "Invalid parameters" 00:14:52.183 } 00:14:52.183 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:52.183 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:52.183 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:52.183 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:52.183 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid e6641964-6aa0-41fd-94a5-92b9dd740ba6 00:14:52.183 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:52.183 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E66419646AA041FD94A592B9DD740BA6 -i 00:14:52.443 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:54.355 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:54.355 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:54.355 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:54.616 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:54.616 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1906116 00:14:54.616 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1906116 ']' 00:14:54.616 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1906116 00:14:54.616 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:54.616 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:54.616 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1906116 00:14:54.616 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:54.616 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:54.616 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1906116' 00:14:54.616 killing process with pid 1906116 00:14:54.616 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1906116 00:14:54.616 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1906116 00:14:54.876 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:54.876 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:54.876 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:54.876 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:54.876 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:54.876 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:54.876 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:54.876 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:54.876 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:54.876 rmmod nvme_tcp 00:14:54.876 rmmod nvme_fabrics 00:14:54.876 rmmod nvme_keyring 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1903724 ']' 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1903724 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1903724 ']' 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1903724 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1903724 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1903724' 00:14:55.136 killing process with pid 1903724 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1903724 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1903724 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
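The masking assertions that dominate the trace above reduce to two small shell helpers. A namespace that is hidden from a host still enumerates, but identifies with an all-zero NGUID, so visibility is decided by comparing the reported NGUID against 32 zeroes; and the NGUIDs handed to the RPCs are simply the namespace UUIDs with the dashes stripped. A minimal sketch reconstructed from the traced commands (the controller name nvme0 and the upper-casing of the NGUID are taken from the log output, not from the helper source):

    # UUID -> NGUID: drop the dashes and upper-case, e.g.
    # e6641964-6aa0-41fd-94a5-92b9dd740ba6 -> E66419646AA041FD94A592B9DD740BA6
    uuid2nguid() { tr -d - <<< "${1^^}"; }

    # A namespace counts as visible when the host can read a real NGUID for it;
    # masked namespaces report all zeroes.
    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"   # prints "[ n]:0xID" when listed
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

In the trace, ns_is_visible 0x1 flips from pass to fail the moment nvmf_ns_remove_host detaches nqn.2016-06.io.spdk:host1 from namespace 1, while 0x2 stays visible until its own remove call.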
00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:55.136 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:55.137 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.137 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:55.137 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:57.680 00:14:57.680 real 0m28.364s 00:14:57.680 user 0m32.502s 00:14:57.680 sys 0m8.239s 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:57.680 ************************************ 00:14:57.680 END TEST nvmf_ns_masking 00:14:57.680 ************************************ 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:57.680 ************************************ 00:14:57.680 START TEST nvmf_nvme_cli 00:14:57.680 ************************************ 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:57.680 * Looking for test storage... 
00:14:57.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:57.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.680 --rc genhtml_branch_coverage=1 00:14:57.680 --rc genhtml_function_coverage=1 00:14:57.680 --rc genhtml_legend=1 00:14:57.680 --rc geninfo_all_blocks=1 00:14:57.680 --rc geninfo_unexecuted_blocks=1 00:14:57.680 00:14:57.680 ' 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:57.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.680 --rc genhtml_branch_coverage=1 00:14:57.680 --rc genhtml_function_coverage=1 00:14:57.680 --rc genhtml_legend=1 00:14:57.680 --rc geninfo_all_blocks=1 00:14:57.680 --rc geninfo_unexecuted_blocks=1 00:14:57.680 00:14:57.680 ' 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:57.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.680 --rc genhtml_branch_coverage=1 00:14:57.680 --rc genhtml_function_coverage=1 00:14:57.680 --rc genhtml_legend=1 00:14:57.680 --rc geninfo_all_blocks=1 00:14:57.680 --rc geninfo_unexecuted_blocks=1 00:14:57.680 00:14:57.680 ' 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:57.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.680 --rc genhtml_branch_coverage=1 00:14:57.680 --rc genhtml_function_coverage=1 00:14:57.680 --rc genhtml_legend=1 00:14:57.680 --rc geninfo_all_blocks=1 00:14:57.680 --rc geninfo_unexecuted_blocks=1 00:14:57.680 00:14:57.680 ' 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
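The lt/cmp_versions trace above is the harness's generic version comparator, used here to gate lcov coverage options: each version string is split on ., -, or :, and the pieces are compared numerically column by column, with missing columns treated as zero. A compact re-implementation of the traced logic (a sketch that assumes purely numeric fields, which is all the lt 1.15 2 call above exercises; the real helper also validates each field with its decimal check):

    # Succeed when version $1 sorts before version $2.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:
        local -a ver1=($1) ver2=($3)
        local op=$2 v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            # The first differing column decides; honour the requested operator.
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == *'>'* ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]   # every column equal
    }

With this, lt 1.15 2 returns success at the very first column (1 < 2), matching the return 0 in the trace.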
00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:57.680 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:57.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:57.681 16:57:49 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:57.681 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:05.817 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:05.817 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.817 
16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:05.817 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:05.817 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:05.817 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:05.817 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:05.817 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:05.817 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:05.817 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:05.817 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:05.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:05.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:15:05.818 00:15:05.818 --- 10.0.0.2 ping statistics --- 00:15:05.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.818 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:05.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:05.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:15:05.818 00:15:05.818 --- 10.0.0.1 ping statistics --- 00:15:05.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.818 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1911619 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1911619 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1911619 ']' 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:05.818 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:05.818 [2024-11-20 16:57:57.252815] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
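At this point the harness has launched the SPDK target inside the network namespace it built a few lines earlier, so the kernel NVMe host in the root namespace reaches it across the physical e810 pair instead of loopback. A condensed sketch of the topology the trace assembled (interface, namespace, and address values are copied from the log; the address flushes and error handling are omitted):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"                          # target-side namespace
    ip link set cvl_0_0 netns "$NS"             # first port: SPDK target NIC
    ip addr add 10.0.0.1/24 dev cvl_0_1         # second port stays with the initiator
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    ping -c 1 10.0.0.2                          # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator
    ip netns exec "$NS" nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Only the target application itself needs the ip netns exec prefix: the JSON-RPC endpoint is a Unix socket on the shared filesystem, which network namespaces do not isolate, so rpc.py keeps working from the root namespace.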
00:15:05.818 [2024-11-20 16:57:57.252887] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.818 [2024-11-20 16:57:57.351227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:05.818 [2024-11-20 16:57:57.405259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.818 [2024-11-20 16:57:57.405312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.818 [2024-11-20 16:57:57.405322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.818 [2024-11-20 16:57:57.405329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.818 [2024-11-20 16:57:57.405341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:05.818 [2024-11-20 16:57:57.407729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.818 [2024-11-20 16:57:57.407891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:05.818 [2024-11-20 16:57:57.408055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.818 [2024-11-20 16:57:57.408057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.079 [2024-11-20 16:57:58.121518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.079 Malloc0 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.079 Malloc1 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.079 [2024-11-20 16:57:58.233274] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.079 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:15:06.340 00:15:06.340 Discovery Log Number of Records 2, Generation counter 2 00:15:06.340 =====Discovery Log Entry 0====== 00:15:06.340 trtype: tcp 00:15:06.340 adrfam: ipv4 00:15:06.340 subtype: current discovery subsystem 00:15:06.340 treq: not required 00:15:06.340 portid: 0 00:15:06.340 trsvcid: 4420 00:15:06.340 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:15:06.340 traddr: 10.0.0.2 00:15:06.340 eflags: explicit discovery connections, duplicate discovery information 00:15:06.340 sectype: none 00:15:06.340 =====Discovery Log Entry 1====== 00:15:06.340 trtype: tcp 00:15:06.340 adrfam: ipv4 00:15:06.340 subtype: nvme subsystem 00:15:06.340 treq: not required 00:15:06.340 portid: 0 00:15:06.340 trsvcid: 4420 00:15:06.340 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:06.340 traddr: 10.0.0.2 00:15:06.340 eflags: none 00:15:06.340 sectype: none 00:15:06.340 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:06.340 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:06.340 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:06.340 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:06.340 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:06.340 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:06.340 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:06.340 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:06.340 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:06.340 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:06.340 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:08.251 16:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:08.251 16:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:15:08.251 16:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:08.251 16:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:08.251 16:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:08.251 16:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:15:10.162 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:10.162 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:10.162 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:10.162 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:10.162 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:10.162 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:15:10.162 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:10.162 16:58:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:10.162 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.162 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:10.162 /dev/nvme0n2 ]] 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:10.162 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:10.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.422 16:58:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:10.422 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:15:10.422 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:10.422 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.422 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:10.422 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:10.682 rmmod nvme_tcp 00:15:10.682 rmmod nvme_fabrics 00:15:10.682 rmmod nvme_keyring 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1911619 ']' 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1911619 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1911619 ']' 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1911619 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1911619 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1911619' 00:15:10.682 killing process with pid 1911619 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1911619 00:15:10.682 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1911619 00:15:10.942 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:10.942 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:10.942 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:10.942 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:10.942 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:15:10.942 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:15:10.942 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:10.942 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:10.942 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:10.942 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.942 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:10.943 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.883 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:12.883 00:15:12.883 real 0m15.534s 00:15:12.883 user 0m24.130s 00:15:12.883 sys 0m6.370s 00:15:12.883 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.883 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.883 ************************************ 00:15:12.883 END TEST nvmf_nvme_cli 00:15:12.883 ************************************ 00:15:12.883 16:58:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:12.883 16:58:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:12.883 16:58:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:12.883 16:58:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.883 16:58:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:12.883 ************************************ 00:15:12.884 START TEST nvmf_vfio_user 00:15:12.884 ************************************ 00:15:12.884 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:15:13.146 * Looking for test storage... 00:15:13.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:13.146 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:13.200 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:13.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.200 --rc genhtml_branch_coverage=1 00:15:13.200 --rc genhtml_function_coverage=1 00:15:13.200 --rc genhtml_legend=1 00:15:13.200 --rc geninfo_all_blocks=1 00:15:13.200 --rc geninfo_unexecuted_blocks=1 00:15:13.200 00:15:13.200 ' 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:13.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.201 --rc genhtml_branch_coverage=1 00:15:13.201 --rc genhtml_function_coverage=1 00:15:13.201 --rc genhtml_legend=1 00:15:13.201 --rc geninfo_all_blocks=1 00:15:13.201 --rc geninfo_unexecuted_blocks=1 00:15:13.201 00:15:13.201 ' 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:13.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.201 --rc genhtml_branch_coverage=1 00:15:13.201 --rc genhtml_function_coverage=1 00:15:13.201 --rc genhtml_legend=1 00:15:13.201 --rc geninfo_all_blocks=1 00:15:13.201 --rc geninfo_unexecuted_blocks=1 00:15:13.201 00:15:13.201 ' 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:13.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.201 --rc genhtml_branch_coverage=1 00:15:13.201 --rc genhtml_function_coverage=1 00:15:13.201 --rc genhtml_legend=1 00:15:13.201 --rc geninfo_all_blocks=1 00:15:13.201 --rc geninfo_unexecuted_blocks=1 00:15:13.201 00:15:13.201 ' 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:13.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
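The nvmf_vfio_user test that starts here swaps the network transport for SPDK's vfio-user transport: each subsystem is surfaced as an emulated NVMe PCI controller behind a UNIX socket directory rather than an IP/port listener, so traddr becomes a filesystem path. The trace below restarts nvmf_tgt on cores 0-3 and creates the transport; sketched with the paths this log uses and $rpc as defined earlier:

    # VFIOUSER listeners take a directory as traddr; the target places a vfio-user
    # control socket ("cntrl") under it for a client process to attach to.
    rm -rf /var/run/vfio-user && mkdir -p /var/run/vfio-user
    $rpc nvmf_create_transport -t VFIOUSER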
00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1913434 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1913434' 00:15:13.201 Process pid: 1913434 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1913434 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1913434 ']' 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.201 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:13.462 [2024-11-20 16:58:05.363096] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:15:13.462 [2024-11-20 16:58:05.363179] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.462 [2024-11-20 16:58:05.450850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:13.462 [2024-11-20 16:58:05.481358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.462 [2024-11-20 16:58:05.481385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:13.462 [2024-11-20 16:58:05.481391] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.462 [2024-11-20 16:58:05.481400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.462 [2024-11-20 16:58:05.481404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:13.462 [2024-11-20 16:58:05.482630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.462 [2024-11-20 16:58:05.482781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.462 [2024-11-20 16:58:05.482925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:13.462 [2024-11-20 16:58:05.482927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.031 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.031 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:14.031 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:15.413 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:15.413 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:15.413 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:15.413 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:15.413 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:15.413 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:15.413 Malloc1 00:15:15.413 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:15.672 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:15.931 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:15.931 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:15.931 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:15.931 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:16.192 Malloc2 00:15:16.192 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
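The setup just traced is one loop body repeated for each of the NUM_DEVICES=2 devices: make the per-device socket directory, back it with a 64 MiB malloc bdev, create subsystem nqn.2019-07.io.spdk:cnode$i with serial SPDK$i, attach the namespace, and add a VFIOUSER listener rooted at the directory (service ID 0, since there is no port). As a single loop, with the same arguments as above:

    for i in 1 2; do
        mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
        $rpc bdev_malloc_create 64 512 -b "Malloc$i"
        $rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
        $rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
        $rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" -t VFIOUSER \
            -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
    done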
00:15:16.452 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:16.712 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:16.712 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:16.712 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:16.712 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:16.712 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:16.712 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:16.712 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:16.712 [2024-11-20 16:58:08.866196] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:15:16.712 [2024-11-20 16:58:08.866239] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1914125 ] 00:15:16.974 [2024-11-20 16:58:08.907466] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:16.974 [2024-11-20 16:58:08.909770] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:16.974 [2024-11-20 16:58:08.909787] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3e528d6000 00:15:16.974 [2024-11-20 16:58:08.910761] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:16.974 [2024-11-20 16:58:08.911763] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:16.974 [2024-11-20 16:58:08.912771] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:16.974 [2024-11-20 16:58:08.913775] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:16.974 [2024-11-20 16:58:08.914775] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:16.974 [2024-11-20 16:58:08.915787] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:16.974 [2024-11-20 16:58:08.916786] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:15:16.974 [2024-11-20 16:58:08.917799] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:16.974 [2024-11-20 16:58:08.918806] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:16.974 [2024-11-20 16:58:08.918813] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3e528cb000 00:15:16.974 [2024-11-20 16:58:08.919727] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:16.974 [2024-11-20 16:58:08.932442] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:16.974 [2024-11-20 16:58:08.932463] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:16.974 [2024-11-20 16:58:08.937908] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:16.974 [2024-11-20 16:58:08.937944] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:16.974 [2024-11-20 16:58:08.938010] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:16.974 [2024-11-20 16:58:08.938024] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:16.974 [2024-11-20 16:58:08.938028] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:16.974 [2024-11-20 16:58:08.938904] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:16.974 [2024-11-20 16:58:08.938912] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:16.974 [2024-11-20 16:58:08.938917] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:16.974 [2024-11-20 16:58:08.939910] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:16.974 [2024-11-20 16:58:08.939917] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:16.974 [2024-11-20 16:58:08.939922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:16.974 [2024-11-20 16:58:08.940920] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:16.974 [2024-11-20 16:58:08.940926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:16.974 [2024-11-20 16:58:08.941926] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
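The DEBUG entries above are spdk_nvme_identify attaching to the first device over the vfio-user socket: it maps the emulated PCI BARs (BAR0 carries the NVMe register file, which is why reads of VS at offset 0x8, CC at 0x14, and CSTS at 0x1c follow), then runs the standard enable handshake: with CC.EN = 0 and CSTS.RDY = 0 it programs the admin queue registers (AQA/ASQ/ACQ at 0x24/0x28/0x30), sets CC.EN = 1, and polls CSTS until RDY = 1, confirmed a few entries below. The VS reading 0x10300 decodes to NVMe 1.3.0, matching the identify output further down. The invocation, exactly as the script runs it against the first device:

    # Flags copied from the trace above; -L enables the nvme/nvme_vfio/vfio_pci
    # debug log flags that produce the register-level entries seen here.
    "$SPDK_DIR/build/bin/spdk_nvme_identify" \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci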
00:15:16.974 [2024-11-20 16:58:08.941932] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:16.974 [2024-11-20 16:58:08.941936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:16.974 [2024-11-20 16:58:08.941941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:16.974 [2024-11-20 16:58:08.942047] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:16.974 [2024-11-20 16:58:08.942051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:16.974 [2024-11-20 16:58:08.942055] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:16.974 [2024-11-20 16:58:08.942930] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:16.974 [2024-11-20 16:58:08.943941] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:16.974 [2024-11-20 16:58:08.944950] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:16.974 [2024-11-20 16:58:08.945948] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:16.974 [2024-11-20 16:58:08.946001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:16.974 [2024-11-20 16:58:08.946961] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:16.974 [2024-11-20 16:58:08.946966] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:16.974 [2024-11-20 16:58:08.946970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:16.974 [2024-11-20 16:58:08.946985] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:16.974 [2024-11-20 16:58:08.946992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:16.974 [2024-11-20 16:58:08.947004] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:16.974 [2024-11-20 16:58:08.947007] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:16.974 [2024-11-20 16:58:08.947010] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.974 [2024-11-20 16:58:08.947021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:15:16.974 [2024-11-20 16:58:08.947057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:16.974 [2024-11-20 16:58:08.947065] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:16.974 [2024-11-20 16:58:08.947069] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:16.974 [2024-11-20 16:58:08.947072] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:16.975 [2024-11-20 16:58:08.947075] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:16.975 [2024-11-20 16:58:08.947080] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:16.975 [2024-11-20 16:58:08.947083] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:16.975 [2024-11-20 16:58:08.947087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947102] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:16.975 [2024-11-20 16:58:08.947114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:16.975 [2024-11-20 16:58:08.947123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.975 [2024-11-20 16:58:08.947129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.975 [2024-11-20 16:58:08.947135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.975 [2024-11-20 16:58:08.947141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.975 [2024-11-20 16:58:08.947145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947156] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:16.975 [2024-11-20 16:58:08.947167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:16.975 [2024-11-20 16:58:08.947173] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:16.975 
[2024-11-20 16:58:08.947176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947194] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:16.975 [2024-11-20 16:58:08.947202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:16.975 [2024-11-20 16:58:08.947246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947252] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947257] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:16.975 [2024-11-20 16:58:08.947260] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:16.975 [2024-11-20 16:58:08.947262] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.975 [2024-11-20 16:58:08.947267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:16.975 [2024-11-20 16:58:08.947276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:16.975 [2024-11-20 16:58:08.947284] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:16.975 [2024-11-20 16:58:08.947294] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947305] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:16.975 [2024-11-20 16:58:08.947308] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:16.975 [2024-11-20 16:58:08.947310] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.975 [2024-11-20 16:58:08.947315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:16.975 [2024-11-20 16:58:08.947336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:16.975 [2024-11-20 16:58:08.947347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947352] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947357] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:16.975 [2024-11-20 16:58:08.947360] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:16.975 [2024-11-20 16:58:08.947362] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.975 [2024-11-20 16:58:08.947366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:16.975 [2024-11-20 16:58:08.947374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:16.975 [2024-11-20 16:58:08.947382] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947400] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947408] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:16.975 [2024-11-20 16:58:08.947411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:16.975 [2024-11-20 16:58:08.947415] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:16.975 [2024-11-20 16:58:08.947429] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:16.975 [2024-11-20 16:58:08.947437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:16.975 [2024-11-20 16:58:08.947445] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:16.975 [2024-11-20 16:58:08.947452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:16.975 [2024-11-20 16:58:08.947460] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:16.975 [2024-11-20 16:58:08.947468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:16.975 [2024-11-20 16:58:08.947476] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:16.975 [2024-11-20 16:58:08.947486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:16.975 [2024-11-20 16:58:08.947495] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:16.975 [2024-11-20 16:58:08.947499] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:16.975 [2024-11-20 16:58:08.947501] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:16.975 [2024-11-20 16:58:08.947504] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:16.975 [2024-11-20 16:58:08.947506] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:16.975 [2024-11-20 16:58:08.947510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:16.975 [2024-11-20 16:58:08.947516] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:16.975 [2024-11-20 16:58:08.947519] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:16.975 [2024-11-20 16:58:08.947521] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.975 [2024-11-20 16:58:08.947527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:16.975 [2024-11-20 16:58:08.947532] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:16.975 [2024-11-20 16:58:08.947535] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:16.975 [2024-11-20 16:58:08.947537] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.975 [2024-11-20 16:58:08.947541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:16.975 [2024-11-20 16:58:08.947547] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:16.975 [2024-11-20 16:58:08.947550] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:16.975 [2024-11-20 16:58:08.947552] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.975 [2024-11-20 16:58:08.947557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:16.975 [2024-11-20 16:58:08.947562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:16.975 [2024-11-20 16:58:08.947570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:15:16.976 [2024-11-20 16:58:08.947577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:16.976 [2024-11-20 16:58:08.947582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:16.976 ===================================================== 00:15:16.976 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:16.976 ===================================================== 00:15:16.976 Controller Capabilities/Features 00:15:16.976 ================================ 00:15:16.976 Vendor ID: 4e58 00:15:16.976 Subsystem Vendor ID: 4e58 00:15:16.976 Serial Number: SPDK1 00:15:16.976 Model Number: SPDK bdev Controller 00:15:16.976 Firmware Version: 25.01 00:15:16.976 Recommended Arb Burst: 6 00:15:16.976 IEEE OUI Identifier: 8d 6b 50 00:15:16.976 Multi-path I/O 00:15:16.976 May have multiple subsystem ports: Yes 00:15:16.976 May have multiple controllers: Yes 00:15:16.976 Associated with SR-IOV VF: No 00:15:16.976 Max Data Transfer Size: 131072 00:15:16.976 Max Number of Namespaces: 32 00:15:16.976 Max Number of I/O Queues: 127 00:15:16.976 NVMe Specification Version (VS): 1.3 00:15:16.976 NVMe Specification Version (Identify): 1.3 00:15:16.976 Maximum Queue Entries: 256 00:15:16.976 Contiguous Queues Required: Yes 00:15:16.976 Arbitration Mechanisms Supported 00:15:16.976 Weighted Round Robin: Not Supported 00:15:16.976 Vendor Specific: Not Supported 00:15:16.976 Reset Timeout: 15000 ms 00:15:16.976 Doorbell Stride: 4 bytes 00:15:16.976 NVM Subsystem Reset: Not Supported 00:15:16.976 Command Sets Supported 00:15:16.976 NVM Command Set: Supported 00:15:16.976 Boot Partition: Not Supported 00:15:16.976 Memory Page Size Minimum: 4096 bytes 00:15:16.976 Memory Page Size Maximum: 4096 bytes 00:15:16.976 Persistent Memory Region: Not Supported 00:15:16.976 Optional Asynchronous Events Supported 00:15:16.976 Namespace Attribute Notices: Supported 00:15:16.976 Firmware Activation Notices: Not Supported 00:15:16.976 ANA Change Notices: Not Supported 00:15:16.976 PLE Aggregate Log Change Notices: Not Supported 00:15:16.976 LBA Status Info Alert Notices: Not Supported 00:15:16.976 EGE Aggregate Log Change Notices: Not Supported 00:15:16.976 Normal NVM Subsystem Shutdown event: Not Supported 00:15:16.976 Zone Descriptor Change Notices: Not Supported 00:15:16.976 Discovery Log Change Notices: Not Supported 00:15:16.976 Controller Attributes 00:15:16.976 128-bit Host Identifier: Supported 00:15:16.976 Non-Operational Permissive Mode: Not Supported 00:15:16.976 NVM Sets: Not Supported 00:15:16.976 Read Recovery Levels: Not Supported 00:15:16.976 Endurance Groups: Not Supported 00:15:16.976 Predictable Latency Mode: Not Supported 00:15:16.976 Traffic Based Keep ALive: Not Supported 00:15:16.976 Namespace Granularity: Not Supported 00:15:16.976 SQ Associations: Not Supported 00:15:16.976 UUID List: Not Supported 00:15:16.976 Multi-Domain Subsystem: Not Supported 00:15:16.976 Fixed Capacity Management: Not Supported 00:15:16.976 Variable Capacity Management: Not Supported 00:15:16.976 Delete Endurance Group: Not Supported 00:15:16.976 Delete NVM Set: Not Supported 00:15:16.976 Extended LBA Formats Supported: Not Supported 00:15:16.976 Flexible Data Placement Supported: Not Supported 00:15:16.976 00:15:16.976 Controller Memory Buffer Support 00:15:16.976 ================================ 00:15:16.976 
Supported: No 00:15:16.976 00:15:16.976 Persistent Memory Region Support 00:15:16.976 ================================ 00:15:16.976 Supported: No 00:15:16.976 00:15:16.976 Admin Command Set Attributes 00:15:16.976 ============================ 00:15:16.976 Security Send/Receive: Not Supported 00:15:16.976 Format NVM: Not Supported 00:15:16.976 Firmware Activate/Download: Not Supported 00:15:16.976 Namespace Management: Not Supported 00:15:16.976 Device Self-Test: Not Supported 00:15:16.976 Directives: Not Supported 00:15:16.976 NVMe-MI: Not Supported 00:15:16.976 Virtualization Management: Not Supported 00:15:16.976 Doorbell Buffer Config: Not Supported 00:15:16.976 Get LBA Status Capability: Not Supported 00:15:16.976 Command & Feature Lockdown Capability: Not Supported 00:15:16.976 Abort Command Limit: 4 00:15:16.976 Async Event Request Limit: 4 00:15:16.976 Number of Firmware Slots: N/A 00:15:16.976 Firmware Slot 1 Read-Only: N/A 00:15:16.976 Firmware Activation Without Reset: N/A 00:15:16.976 Multiple Update Detection Support: N/A 00:15:16.976 Firmware Update Granularity: No Information Provided 00:15:16.976 Per-Namespace SMART Log: No 00:15:16.976 Asymmetric Namespace Access Log Page: Not Supported 00:15:16.976 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:16.976 Command Effects Log Page: Supported 00:15:16.976 Get Log Page Extended Data: Supported 00:15:16.976 Telemetry Log Pages: Not Supported 00:15:16.976 Persistent Event Log Pages: Not Supported 00:15:16.976 Supported Log Pages Log Page: May Support 00:15:16.976 Commands Supported & Effects Log Page: Not Supported 00:15:16.976 Feature Identifiers & Effects Log Page:May Support 00:15:16.976 NVMe-MI Commands & Effects Log Page: May Support 00:15:16.976 Data Area 4 for Telemetry Log: Not Supported 00:15:16.976 Error Log Page Entries Supported: 128 00:15:16.976 Keep Alive: Supported 00:15:16.976 Keep Alive Granularity: 10000 ms 00:15:16.976 00:15:16.976 NVM Command Set Attributes 00:15:16.976 ========================== 00:15:16.976 Submission Queue Entry Size 00:15:16.976 Max: 64 00:15:16.976 Min: 64 00:15:16.976 Completion Queue Entry Size 00:15:16.976 Max: 16 00:15:16.976 Min: 16 00:15:16.976 Number of Namespaces: 32 00:15:16.976 Compare Command: Supported 00:15:16.976 Write Uncorrectable Command: Not Supported 00:15:16.976 Dataset Management Command: Supported 00:15:16.976 Write Zeroes Command: Supported 00:15:16.976 Set Features Save Field: Not Supported 00:15:16.976 Reservations: Not Supported 00:15:16.976 Timestamp: Not Supported 00:15:16.976 Copy: Supported 00:15:16.976 Volatile Write Cache: Present 00:15:16.976 Atomic Write Unit (Normal): 1 00:15:16.976 Atomic Write Unit (PFail): 1 00:15:16.976 Atomic Compare & Write Unit: 1 00:15:16.976 Fused Compare & Write: Supported 00:15:16.976 Scatter-Gather List 00:15:16.976 SGL Command Set: Supported (Dword aligned) 00:15:16.976 SGL Keyed: Not Supported 00:15:16.976 SGL Bit Bucket Descriptor: Not Supported 00:15:16.976 SGL Metadata Pointer: Not Supported 00:15:16.976 Oversized SGL: Not Supported 00:15:16.976 SGL Metadata Address: Not Supported 00:15:16.976 SGL Offset: Not Supported 00:15:16.976 Transport SGL Data Block: Not Supported 00:15:16.976 Replay Protected Memory Block: Not Supported 00:15:16.976 00:15:16.976 Firmware Slot Information 00:15:16.976 ========================= 00:15:16.976 Active slot: 1 00:15:16.976 Slot 1 Firmware Revision: 25.01 00:15:16.976 00:15:16.976 00:15:16.976 Commands Supported and Effects 00:15:16.976 ============================== 00:15:16.976 Admin 
Commands 00:15:16.976 -------------- 00:15:16.976 Get Log Page (02h): Supported 00:15:16.976 Identify (06h): Supported 00:15:16.976 Abort (08h): Supported 00:15:16.976 Set Features (09h): Supported 00:15:16.976 Get Features (0Ah): Supported 00:15:16.976 Asynchronous Event Request (0Ch): Supported 00:15:16.976 Keep Alive (18h): Supported 00:15:16.976 I/O Commands 00:15:16.976 ------------ 00:15:16.976 Flush (00h): Supported LBA-Change 00:15:16.976 Write (01h): Supported LBA-Change 00:15:16.976 Read (02h): Supported 00:15:16.976 Compare (05h): Supported 00:15:16.976 Write Zeroes (08h): Supported LBA-Change 00:15:16.976 Dataset Management (09h): Supported LBA-Change 00:15:16.976 Copy (19h): Supported LBA-Change 00:15:16.976 00:15:16.976 Error Log 00:15:16.976 ========= 00:15:16.976 00:15:16.976 Arbitration 00:15:16.976 =========== 00:15:16.976 Arbitration Burst: 1 00:15:16.976 00:15:16.976 Power Management 00:15:16.976 ================ 00:15:16.976 Number of Power States: 1 00:15:16.976 Current Power State: Power State #0 00:15:16.976 Power State #0: 00:15:16.976 Max Power: 0.00 W 00:15:16.976 Non-Operational State: Operational 00:15:16.976 Entry Latency: Not Reported 00:15:16.976 Exit Latency: Not Reported 00:15:16.976 Relative Read Throughput: 0 00:15:16.976 Relative Read Latency: 0 00:15:16.976 Relative Write Throughput: 0 00:15:16.976 Relative Write Latency: 0 00:15:16.976 Idle Power: Not Reported 00:15:16.976 Active Power: Not Reported 00:15:16.976 Non-Operational Permissive Mode: Not Supported 00:15:16.976 00:15:16.976 Health Information 00:15:16.977 ================== 00:15:16.977 Critical Warnings: 00:15:16.977 Available Spare Space: OK 00:15:16.977 Temperature: OK 00:15:16.977 Device Reliability: OK 00:15:16.977 Read Only: No 00:15:16.977 Volatile Memory Backup: OK 00:15:16.977 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:16.977 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:16.977 Available Spare: 0% 00:15:16.977 Available Spare Threshold: 0% 00:15:16.977 Life Percentage Used: 0% 00:15:16.977 Data Units Read: 0 00:15:16.977 Data Units Written: 0 00:15:16.977 Host Read Commands: 0 00:15:16.977 Host Write Commands: 0 00:15:16.977 Controller Busy Time: 0 minutes 00:15:16.977 Power Cycles: 0 00:15:16.977 Power On Hours: 0 hours 00:15:16.977 Unsafe Shutdowns: 0 00:15:16.977 Unrecoverable Media Errors: 0 00:15:16.977 Lifetime Error Log Entries: 0 00:15:16.977 Warning Temperature Time: 0 minutes 00:15:16.977 Critical Temperature Time: 0 minutes 00:15:16.977 00:15:16.977 Number of Queues 00:15:16.977 ================ 00:15:16.977 Number of I/O Submission Queues: 127 00:15:16.977 Number of I/O Completion Queues: 127 00:15:16.977 00:15:16.977 Active Namespaces 00:15:16.977 ================= 00:15:16.977 Namespace ID:1 00:15:16.977 Error Recovery Timeout: Unlimited 00:15:16.977 Command Set Identifier: NVM (00h) 00:15:16.977 Deallocate: Supported 00:15:16.977 Deallocated/Unwritten Error: Not Supported 00:15:16.977 Deallocated Read Value: Unknown 00:15:16.977 Deallocate in Write Zeroes: Not Supported 00:15:16.977 Deallocated Guard Field: 0xFFFF 00:15:16.977 Flush: Supported 00:15:16.977 Reservation: Supported 00:15:16.977 Namespace Sharing Capabilities: Multiple Controllers 00:15:16.977 Size (in LBAs): 131072 (0GiB) 00:15:16.977 Capacity (in LBAs): 131072 (0GiB) 00:15:16.977 Utilization (in LBAs): 131072 (0GiB) 00:15:16.977 NGUID: 71B3F31FE1244647A902C6C221F5051E 00:15:16.977 UUID: 71b3f31f-e124-4647-a902-c6c221f5051e 00:15:16.977 Thin Provisioning: Not Supported 00:15:16.977 Per-NS Atomic Units: Yes 00:15:16.977 Atomic Boundary Size (Normal): 0 00:15:16.977 Atomic Boundary Size (PFail): 0 00:15:16.977 Atomic Boundary Offset: 0 00:15:16.977 Maximum Single Source Range Length: 65535 00:15:16.977 Maximum Copy Length: 65535 00:15:16.977 Maximum Source Range Count: 1 00:15:16.977 NGUID/EUI64 Never Reused: No 00:15:16.977 Namespace Write Protected: No 00:15:16.977 Number of LBA Formats: 1 00:15:16.977 Current LBA Format: LBA Format #00 00:15:16.977 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:16.977 00:15:16.977
[2024-11-20 16:58:08.947655] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:16.977 [2024-11-20 16:58:08.947665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:16.977 [2024-11-20 16:58:08.947685] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:16.977 [2024-11-20 16:58:08.947693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.977 [2024-11-20 16:58:08.947697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.977 [2024-11-20 16:58:08.947702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.977 [2024-11-20 16:58:08.947706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.977 [2024-11-20 16:58:08.947968] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:16.977 [2024-11-20 16:58:08.947976] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:16.977 [2024-11-20 16:58:08.948968] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:16.977 [2024-11-20 16:58:08.949008] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:16.977 [2024-11-20 16:58:08.949013] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:16.977 [2024-11-20 16:58:08.949972] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:16.977 [2024-11-20 16:58:08.949980] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:16.977 [2024-11-20 16:58:08.950035] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:16.977 [2024-11-20 16:58:08.950997] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:16.977
16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
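The @84 trace above is the test's first data-path check: spdk_nvme_perf drives 4096-byte reads at queue depth 128 against the vfio-user controller for five seconds. A minimal sketch of the same invocation follows, with the binary path, transport string, and every flag value copied from the trace; the comments paraphrase the common spdk_nvme_perf options, and -s/-g are memory-setup options passed through by the harness rather than anything this note asserts about the workload.

#!/usr/bin/env bash
# Sketch of the @84 perf run; all values come from the trace above.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
# -r : transport ID (vfio-user transport, controller socket dir, subsystem NQN)
# -q : queue depth 128   -o : IO size 4096 bytes   -w : 100% reads
# -t : run for 5 seconds -c : core mask 0x2 (one worker on core 1)
# -s / -g : hugepage memory configuration supplied by the test harness
"$PERF" \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The latency table that follows (roughly 40k IOPS at ~3.2 ms average, which is consistent with queue depth 128) is this run's output; the @85 run below repeats it with -w write.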
00:15:16.977 [2024-11-20 16:58:09.138843] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:22.266 Initializing NVMe Controllers 00:15:22.266 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:22.266 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:22.266 Initialization complete. Launching workers. 00:15:22.266 ======================================================== 00:15:22.266 Latency(us) 00:15:22.266 Device Information : IOPS MiB/s Average min max 00:15:22.266 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39999.60 156.25 3199.89 855.06 7581.81 00:15:22.266 ======================================================== 00:15:22.266 Total : 39999.60 156.25 3199.89 855.06 7581.81 00:15:22.266 00:15:22.266 [2024-11-20 16:58:14.158978] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:22.266 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:22.266 [2024-11-20 16:58:14.349866] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:27.549 Initializing NVMe Controllers 00:15:27.549 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:27.549 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:27.549 Initialization complete. Launching workers. 
00:15:27.549 ======================================================== 00:15:27.549 Latency(us) 00:15:27.549 Device Information : IOPS MiB/s Average min max 00:15:27.549 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16021.64 62.58 7988.67 6424.49 15962.35 00:15:27.549 ======================================================== 00:15:27.549 Total : 16021.64 62.58 7988.67 6424.49 15962.35 00:15:27.549 00:15:27.549 [2024-11-20 16:58:19.381352] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:27.549 16:58:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:27.549 [2024-11-20 16:58:19.582205] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:32.920 [2024-11-20 16:58:24.648325] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:32.920 Initializing NVMe Controllers 00:15:32.920 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:32.921 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:32.921 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:32.921 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:32.921 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:32.921 Initialization complete. Launching workers. 00:15:32.921 Starting thread on core 2 00:15:32.921 Starting thread on core 3 00:15:32.921 Starting thread on core 1 00:15:32.921 16:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:32.921 [2024-11-20 16:58:24.892476] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:36.216 [2024-11-20 16:58:27.955759] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:36.216 Initializing NVMe Controllers 00:15:36.216 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:36.216 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:36.216 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:36.216 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:36.216 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:36.216 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:36.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:36.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:36.216 Initialization complete. Launching workers. 
00:15:36.216 Starting thread on core 1 with urgent priority queue 00:15:36.216 Starting thread on core 2 with urgent priority queue 00:15:36.216 Starting thread on core 3 with urgent priority queue 00:15:36.216 Starting thread on core 0 with urgent priority queue 00:15:36.216 SPDK bdev Controller (SPDK1 ) core 0: 8142.33 IO/s 12.28 secs/100000 ios 00:15:36.216 SPDK bdev Controller (SPDK1 ) core 1: 11137.33 IO/s 8.98 secs/100000 ios 00:15:36.216 SPDK bdev Controller (SPDK1 ) core 2: 8104.00 IO/s 12.34 secs/100000 ios 00:15:36.216 SPDK bdev Controller (SPDK1 ) core 3: 15145.67 IO/s 6.60 secs/100000 ios 00:15:36.216 ======================================================== 00:15:36.216 00:15:36.216 16:58:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:36.216 [2024-11-20 16:58:28.201667] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:36.216 Initializing NVMe Controllers 00:15:36.216 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:36.216 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:36.216 Namespace ID: 1 size: 0GB 00:15:36.216 Initialization complete. 00:15:36.216 INFO: using host memory buffer for IO 00:15:36.216 Hello world! 00:15:36.216 [2024-11-20 16:58:28.234854] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:36.216 16:58:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:36.476 [2024-11-20 16:58:28.465763] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:37.415 Initializing NVMe Controllers 00:15:37.415 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:37.415 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:37.415 Initialization complete. Launching workers. 
00:15:37.415 submit (in ns) avg, min, max = 6114.1, 2818.3, 3999476.7 00:15:37.415 complete (in ns) avg, min, max = 15669.5, 1641.7, 4107294.2 00:15:37.415 00:15:37.415 Submit histogram 00:15:37.415 ================ 00:15:37.415 Range in us Cumulative Count 00:15:37.415 2.813 - 2.827: 0.3351% ( 68) 00:15:37.415 2.827 - 2.840: 1.5473% ( 246) 00:15:37.415 2.840 - 2.853: 4.5779% ( 615) 00:15:37.415 2.853 - 2.867: 9.1854% ( 935) 00:15:37.415 2.867 - 2.880: 13.9654% ( 970) 00:15:37.415 2.880 - 2.893: 20.0512% ( 1235) 00:15:37.415 2.893 - 2.907: 26.9847% ( 1407) 00:15:37.415 2.907 - 2.920: 32.8143% ( 1183) 00:15:37.415 2.920 - 2.933: 38.1462% ( 1082) 00:15:37.415 2.933 - 2.947: 43.0395% ( 993) 00:15:37.416 2.947 - 2.960: 47.7505% ( 956) 00:15:37.416 2.960 - 2.973: 54.0137% ( 1271) 00:15:37.416 2.973 - 2.987: 63.1055% ( 1845) 00:15:37.416 2.987 - 3.000: 72.5127% ( 1909) 00:15:37.416 3.000 - 3.013: 80.3676% ( 1594) 00:15:37.416 3.013 - 3.027: 87.1335% ( 1373) 00:15:37.416 3.027 - 3.040: 92.4703% ( 1083) 00:15:37.416 3.040 - 3.053: 96.4766% ( 813) 00:15:37.416 3.053 - 3.067: 98.0782% ( 325) 00:15:37.416 3.067 - 3.080: 98.9307% ( 173) 00:15:37.416 3.080 - 3.093: 99.3200% ( 79) 00:15:37.416 3.093 - 3.107: 99.4924% ( 35) 00:15:37.416 3.107 - 3.120: 99.5614% ( 14) 00:15:37.416 3.120 - 3.133: 99.5861% ( 5) 00:15:37.416 3.133 - 3.147: 99.5959% ( 2) 00:15:37.416 3.147 - 3.160: 99.6008% ( 1) 00:15:37.416 3.240 - 3.253: 99.6058% ( 1) 00:15:37.416 3.573 - 3.600: 99.6107% ( 1) 00:15:37.416 3.813 - 3.840: 99.6156% ( 1) 00:15:37.416 3.840 - 3.867: 99.6206% ( 1) 00:15:37.416 4.133 - 4.160: 99.6255% ( 1) 00:15:37.416 4.240 - 4.267: 99.6353% ( 2) 00:15:37.416 4.427 - 4.453: 99.6403% ( 1) 00:15:37.416 4.560 - 4.587: 99.6452% ( 1) 00:15:37.416 4.720 - 4.747: 99.6501% ( 1) 00:15:37.416 4.773 - 4.800: 99.6551% ( 1) 00:15:37.416 4.827 - 4.853: 99.6698% ( 3) 00:15:37.416 4.907 - 4.933: 99.6748% ( 1) 00:15:37.416 5.013 - 5.040: 99.6797% ( 1) 00:15:37.416 5.067 - 5.093: 99.6846% ( 1) 00:15:37.416 5.093 - 5.120: 99.6895% ( 1) 00:15:37.416 5.120 - 5.147: 99.7043% ( 3) 00:15:37.416 5.227 - 5.253: 99.7093% ( 1) 00:15:37.416 5.280 - 5.307: 99.7142% ( 1) 00:15:37.416 5.467 - 5.493: 99.7191% ( 1) 00:15:37.416 5.573 - 5.600: 99.7240% ( 1) 00:15:37.416 5.627 - 5.653: 99.7290% ( 1) 00:15:37.416 5.733 - 5.760: 99.7339% ( 1) 00:15:37.416 5.760 - 5.787: 99.7388% ( 1) 00:15:37.416 5.893 - 5.920: 99.7487% ( 2) 00:15:37.416 5.973 - 6.000: 99.7536% ( 1) 00:15:37.416 6.000 - 6.027: 99.7585% ( 1) 00:15:37.416 6.027 - 6.053: 99.7635% ( 1) 00:15:37.416 6.053 - 6.080: 99.7684% ( 1) 00:15:37.416 6.080 - 6.107: 99.7733% ( 1) 00:15:37.416 6.187 - 6.213: 99.7832% ( 2) 00:15:37.416 6.213 - 6.240: 99.7881% ( 1) 00:15:37.416 6.240 - 6.267: 99.7930% ( 1) 00:15:37.416 6.293 - 6.320: 99.7980% ( 1) 00:15:37.416 6.320 - 6.347: 99.8029% ( 1) 00:15:37.416 6.347 - 6.373: 99.8078% ( 1) 00:15:37.416 6.373 - 6.400: 99.8177% ( 2) 00:15:37.416 6.400 - 6.427: 99.8275% ( 2) 00:15:37.416 6.453 - 6.480: 99.8374% ( 2) 00:15:37.416 6.507 - 6.533: 99.8423% ( 1) 00:15:37.416 6.533 - 6.560: 99.8472% ( 1) 00:15:37.416 6.560 - 6.587: 99.8522% ( 1) 00:15:37.416 6.587 - 6.613: 99.8571% ( 1) 00:15:37.416 6.640 - 6.667: 99.8620% ( 1) 00:15:37.416 6.667 - 6.693: 99.8768% ( 3) 00:15:37.416 6.693 - 6.720: 99.8817% ( 1) 00:15:37.416 6.773 - 6.800: 99.8867% ( 1) 00:15:37.416 6.827 - 6.880: 99.8916% ( 1) 00:15:37.416 6.933 - 6.987: 99.8965% ( 1) 00:15:37.416 7.200 - 7.253: 99.9014% ( 1) 00:15:37.416 7.253 - 7.307: 99.9064% ( 1) 00:15:37.416 7.467 - 7.520: 99.9113% ( 1) 00:15:37.416 
7.733 - 7.787: 99.9162% ( 1) 00:15:37.416 11.147 - 11.200: 99.9212% ( 1) 00:15:37.416 [2024-11-20 16:58:29.486462] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:37.416 3986.773 - 4014.080: 100.0000% ( 16) 00:15:37.416 00:15:37.416 Complete histogram 00:15:37.416 ================== 00:15:37.416 Range in us Cumulative Count 00:15:37.416 1.640 - 1.647: 0.1971% ( 40) 00:15:37.416 1.647 - 1.653: 0.7195% ( 106) 00:15:37.416 1.653 - 1.660: 0.8229% ( 21) 00:15:37.416 1.660 - 1.667: 0.9067% ( 17) 00:15:37.416 1.667 - 1.673: 0.9708% ( 13) 00:15:37.416 1.673 - 1.680: 0.9905% ( 4) 00:15:37.416 1.680 - 1.687: 15.3206% ( 2908) 00:15:37.416 1.687 - 1.693: 50.0616% ( 7050) 00:15:37.416 1.693 - 1.700: 54.9943% ( 1001) 00:15:37.416 1.700 - 1.707: 65.7616% ( 2185) 00:15:37.416 1.707 - 1.720: 78.4211% ( 2569) 00:15:37.416 1.720 - 1.733: 82.7724% ( 883) 00:15:37.416 1.733 - 1.747: 84.2507% ( 300) 00:15:37.416 1.747 - 1.760: 90.1247% ( 1192) 00:15:37.416 1.760 - 1.773: 95.4073% ( 1072) 00:15:37.416 1.773 - 1.787: 97.9599% ( 518) 00:15:37.416 1.787 - 1.800: 99.1426% ( 240) 00:15:37.416 1.800 - 1.813: 99.3495% ( 42) 00:15:37.416 1.813 - 1.827: 99.3840% ( 7) 00:15:37.416 1.827 - 1.840: 99.3890% ( 1) 00:15:37.416 1.840 - 1.853: 99.3939% ( 1) 00:15:37.416 1.853 - 1.867: 99.4037% ( 2) 00:15:37.416 3.333 - 3.347: 99.4087% ( 1) 00:15:37.416 3.520 - 3.547: 99.4136% ( 1) 00:15:37.416 3.547 - 3.573: 99.4185% ( 1) 00:15:37.416 3.733 - 3.760: 99.4234% ( 1) 00:15:37.416 4.053 - 4.080: 99.4284% ( 1) 00:15:37.416 4.107 - 4.133: 99.4333% ( 1) 00:15:37.416 4.133 - 4.160: 99.4382% ( 1) 00:15:37.416 4.160 - 4.187: 99.4432% ( 1) 00:15:37.416 4.213 - 4.240: 99.4481% ( 1) 00:15:37.416 4.453 - 4.480: 99.4530% ( 1) 00:15:37.416 4.480 - 4.507: 99.4579% ( 1) 00:15:37.416 4.720 - 4.747: 99.4678% ( 2) 00:15:37.416 4.773 - 4.800: 99.4727% ( 1) 00:15:37.416 4.827 - 4.853: 99.4777% ( 1) 00:15:37.416 4.853 - 4.880: 99.4875% ( 2) 00:15:37.416 4.960 - 4.987: 99.4924% ( 1) 00:15:37.416 5.013 - 5.040: 99.5023% ( 2) 00:15:37.416 5.040 - 5.067: 99.5072% ( 1) 00:15:37.416 5.093 - 5.120: 99.5121% ( 1) 00:15:37.416 5.173 - 5.200: 99.5171% ( 1) 00:15:37.416 5.253 - 5.280: 99.5220% ( 1) 00:15:37.416 5.280 - 5.307: 99.5269% ( 1) 00:15:37.416 5.307 - 5.333: 99.5368% ( 2) 00:15:37.416 5.360 - 5.387: 99.5417% ( 1) 00:15:37.416 5.413 - 5.440: 99.5466% ( 1) 00:15:37.416 5.440 - 5.467: 99.5516% ( 1) 00:15:37.416 5.493 - 5.520: 99.5565% ( 1) 00:15:37.416 5.547 - 5.573: 99.5664% ( 2) 00:15:37.416 5.573 - 5.600: 99.5713% ( 1) 00:15:37.416 5.627 - 5.653: 99.5762% ( 1) 00:15:37.416 5.680 - 5.707: 99.5811% ( 1) 00:15:37.416 5.733 - 5.760: 99.5861% ( 1) 00:15:37.416 5.813 - 5.840: 99.5959% ( 2) 00:15:37.416 5.947 - 5.973: 99.6058% ( 2) 00:15:37.416 6.027 - 6.053: 99.6107% ( 1) 00:15:37.416 6.213 - 6.240: 99.6156% ( 1) 00:15:37.416 6.240 - 6.267: 99.6206% ( 1) 00:15:37.416 6.267 - 6.293: 99.6255% ( 1) 00:15:37.416 6.507 - 6.533: 99.6304% ( 1) 00:15:37.416 7.733 - 7.787: 99.6353% ( 1) 00:15:37.416 9.173 - 9.227: 99.6403% ( 1) 00:15:37.416 11.627 - 11.680: 99.6452% ( 1) 00:15:37.416 12.320 - 12.373: 99.6501% ( 1) 00:15:37.417 3413.333 - 3426.987: 99.6551% ( 1) 00:15:37.417 3986.773 - 4014.080: 99.9951% ( 69) 00:15:37.417 4096.000 - 4123.307: 100.0000% ( 1) 00:15:37.417 00:15:37.417 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:37.417 16:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:37.417 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:37.417 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:37.417 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:37.677 [ 00:15:37.677 { 00:15:37.677 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:37.677 "subtype": "Discovery", 00:15:37.677 "listen_addresses": [], 00:15:37.677 "allow_any_host": true, 00:15:37.677 "hosts": [] 00:15:37.677 }, 00:15:37.677 { 00:15:37.677 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:37.677 "subtype": "NVMe", 00:15:37.677 "listen_addresses": [ 00:15:37.677 { 00:15:37.677 "trtype": "VFIOUSER", 00:15:37.677 "adrfam": "IPv4", 00:15:37.677 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:37.677 "trsvcid": "0" 00:15:37.677 } 00:15:37.677 ], 00:15:37.677 "allow_any_host": true, 00:15:37.677 "hosts": [], 00:15:37.677 "serial_number": "SPDK1", 00:15:37.677 "model_number": "SPDK bdev Controller", 00:15:37.677 "max_namespaces": 32, 00:15:37.677 "min_cntlid": 1, 00:15:37.677 "max_cntlid": 65519, 00:15:37.677 "namespaces": [ 00:15:37.677 { 00:15:37.677 "nsid": 1, 00:15:37.677 "bdev_name": "Malloc1", 00:15:37.677 "name": "Malloc1", 00:15:37.677 "nguid": "71B3F31FE1244647A902C6C221F5051E", 00:15:37.677 "uuid": "71b3f31f-e124-4647-a902-c6c221f5051e" 00:15:37.677 } 00:15:37.677 ] 00:15:37.677 }, 00:15:37.677 { 00:15:37.677 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:37.677 "subtype": "NVMe", 00:15:37.677 "listen_addresses": [ 00:15:37.677 { 00:15:37.677 "trtype": "VFIOUSER", 00:15:37.677 "adrfam": "IPv4", 00:15:37.677 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:37.677 "trsvcid": "0" 00:15:37.677 } 00:15:37.677 ], 00:15:37.677 "allow_any_host": true, 00:15:37.677 "hosts": [], 00:15:37.677 "serial_number": "SPDK2", 00:15:37.677 "model_number": "SPDK bdev Controller", 00:15:37.677 "max_namespaces": 32, 00:15:37.677 "min_cntlid": 1, 00:15:37.677 "max_cntlid": 65519, 00:15:37.677 "namespaces": [ 00:15:37.677 { 00:15:37.677 "nsid": 1, 00:15:37.677 "bdev_name": "Malloc2", 00:15:37.677 "name": "Malloc2", 00:15:37.677 "nguid": "2235033A52AE420189F329F40EB54BA5", 00:15:37.677 "uuid": "2235033a-52ae-4201-89f3-29f40eb54ba5" 00:15:37.677 } 00:15:37.677 ] 00:15:37.677 } 00:15:37.677 ] 00:15:37.677 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:37.677 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1918162 00:15:37.677 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:37.677 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:37.677 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:37.677 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:37.677 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:37.677 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:37.677 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:37.677 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:37.937 [2024-11-20 16:58:29.866527] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:37.937 Malloc3 00:15:37.937 16:58:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:37.937 [2024-11-20 16:58:30.059912] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:37.937 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:37.937 Asynchronous Event Request test 00:15:37.937 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:37.937 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:37.937 Registering asynchronous event callbacks... 00:15:37.937 Starting namespace attribute notice tests for all controllers... 00:15:37.937 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:37.937 aer_cb - Changed Namespace 00:15:37.937 Cleaning up... 
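The AER test above is driven by three RPCs, all visible in the trace: create a malloc bdev, attach it to cnode1 as a second namespace (which fires the namespace-attribute-change AER that the aer binary waits on via /tmp/aer_touch_file), then re-query the subsystems. A condensed sketch with the rpc.py path and arguments exactly as traced; the units in the first comment reflect bdev_malloc_create's conventional MB/byte arguments and are an assumption of this note, not something the log states.

#!/usr/bin/env bash
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Create the backing bdev (conventionally 64 MB total, 512-byte blocks).
"$RPC" bdev_malloc_create 64 512 --name Malloc3
# Attach it as namespace 2 of cnode1; this is what triggers the
# namespace-change async event logged as "aer_cb - Changed Namespace".
"$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
# The JSON dump that follows in the log now lists Malloc3 (nsid 2) under cnode1.
"$RPC" nvmf_get_subsystems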
00:15:38.198 [ 00:15:38.198 { 00:15:38.198 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:38.198 "subtype": "Discovery", 00:15:38.198 "listen_addresses": [], 00:15:38.198 "allow_any_host": true, 00:15:38.198 "hosts": [] 00:15:38.198 }, 00:15:38.198 { 00:15:38.198 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:38.198 "subtype": "NVMe", 00:15:38.198 "listen_addresses": [ 00:15:38.198 { 00:15:38.198 "trtype": "VFIOUSER", 00:15:38.198 "adrfam": "IPv4", 00:15:38.198 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:38.198 "trsvcid": "0" 00:15:38.198 } 00:15:38.198 ], 00:15:38.198 "allow_any_host": true, 00:15:38.198 "hosts": [], 00:15:38.198 "serial_number": "SPDK1", 00:15:38.198 "model_number": "SPDK bdev Controller", 00:15:38.198 "max_namespaces": 32, 00:15:38.198 "min_cntlid": 1, 00:15:38.198 "max_cntlid": 65519, 00:15:38.198 "namespaces": [ 00:15:38.198 { 00:15:38.198 "nsid": 1, 00:15:38.198 "bdev_name": "Malloc1", 00:15:38.198 "name": "Malloc1", 00:15:38.198 "nguid": "71B3F31FE1244647A902C6C221F5051E", 00:15:38.198 "uuid": "71b3f31f-e124-4647-a902-c6c221f5051e" 00:15:38.198 }, 00:15:38.198 { 00:15:38.198 "nsid": 2, 00:15:38.198 "bdev_name": "Malloc3", 00:15:38.198 "name": "Malloc3", 00:15:38.198 "nguid": "BDDA8CE7130546368B550FA9A975386F", 00:15:38.198 "uuid": "bdda8ce7-1305-4636-8b55-0fa9a975386f" 00:15:38.198 } 00:15:38.198 ] 00:15:38.198 }, 00:15:38.198 { 00:15:38.198 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:38.198 "subtype": "NVMe", 00:15:38.198 "listen_addresses": [ 00:15:38.198 { 00:15:38.198 "trtype": "VFIOUSER", 00:15:38.198 "adrfam": "IPv4", 00:15:38.198 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:38.198 "trsvcid": "0" 00:15:38.198 } 00:15:38.198 ], 00:15:38.198 "allow_any_host": true, 00:15:38.198 "hosts": [], 00:15:38.198 "serial_number": "SPDK2", 00:15:38.198 "model_number": "SPDK bdev Controller", 00:15:38.198 "max_namespaces": 32, 00:15:38.198 "min_cntlid": 1, 00:15:38.198 "max_cntlid": 65519, 00:15:38.198 "namespaces": [ 00:15:38.198 { 00:15:38.198 "nsid": 1, 00:15:38.198 "bdev_name": "Malloc2", 00:15:38.198 "name": "Malloc2", 00:15:38.198 "nguid": "2235033A52AE420189F329F40EB54BA5", 00:15:38.198 "uuid": "2235033a-52ae-4201-89f3-29f40eb54ba5" 00:15:38.198 } 00:15:38.198 ] 00:15:38.198 } 00:15:38.198 ] 00:15:38.198 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1918162 00:15:38.198 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:38.198 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:38.198 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:38.198 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:38.199 [2024-11-20 16:58:30.287207] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:15:38.199 [2024-11-20 16:58:30.287251] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1918172 ] 00:15:38.199 [2024-11-20 16:58:30.326435] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:38.199 [2024-11-20 16:58:30.331632] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:38.199 [2024-11-20 16:58:30.331651] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9dad4bd000 00:15:38.199 [2024-11-20 16:58:30.332633] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.199 [2024-11-20 16:58:30.333640] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.199 [2024-11-20 16:58:30.334648] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.199 [2024-11-20 16:58:30.335650] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:38.199 [2024-11-20 16:58:30.336662] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:38.199 [2024-11-20 16:58:30.337669] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.199 [2024-11-20 16:58:30.338680] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:38.199 [2024-11-20 16:58:30.339684] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.199 [2024-11-20 16:58:30.340694] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:38.199 [2024-11-20 16:58:30.340701] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9dad4b2000 00:15:38.199 [2024-11-20 16:58:30.341612] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:38.199 [2024-11-20 16:58:30.350991] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:38.199 [2024-11-20 16:58:30.351014] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:38.199 [2024-11-20 16:58:30.356082] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:38.199 [2024-11-20 16:58:30.356115] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:38.199 [2024-11-20 16:58:30.356179] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:38.199 
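The @83 trace kicked off spdk_nvme_identify against the second controller, and the Bar/Sparse-region and register DEBUG records above are produced by its -L switches, which turn on per-component debug logging. A sketch of the same call, every value taken from the trace (the comment on -L reflects SPDK's usual log-flag option; -g is passed by the harness and not interpreted here):

#!/usr/bin/env bash
IDENTIFY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
# -L enables debug output for the named components (nvme, nvme_vfio,
# vfio_pci), which yields the BAR-mapping and register-access records above.
"$IDENTIFY" \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -g -L nvme -L nvme_vfio -L vfio_pci

With the BARs mapped, the records that follow are the standard controller bring-up visible in this log: read VS and CAP, clear CC.EN and wait for CSTS.RDY = 0, set CC.EN = 1 and wait for CSTS.RDY = 1, then IDENTIFY and the feature/log-page queries.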
[2024-11-20 16:58:30.356192] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:38.199 [2024-11-20 16:58:30.356196] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:38.199 [2024-11-20 16:58:30.357087] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:38.199 [2024-11-20 16:58:30.357094] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:38.199 [2024-11-20 16:58:30.357099] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:38.199 [2024-11-20 16:58:30.358094] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:38.199 [2024-11-20 16:58:30.358102] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:38.199 [2024-11-20 16:58:30.358111] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:38.199 [2024-11-20 16:58:30.359101] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:38.199 [2024-11-20 16:58:30.359107] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:38.199 [2024-11-20 16:58:30.360106] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:38.199 [2024-11-20 16:58:30.360113] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:38.199 [2024-11-20 16:58:30.360116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:38.199 [2024-11-20 16:58:30.360121] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:38.199 [2024-11-20 16:58:30.360227] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:38.199 [2024-11-20 16:58:30.360231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:38.199 [2024-11-20 16:58:30.360235] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:38.199 [2024-11-20 16:58:30.361119] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:38.199 [2024-11-20 16:58:30.362120] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:38.199 [2024-11-20 16:58:30.363127] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:38.199 [2024-11-20 16:58:30.364133] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:38.199 [2024-11-20 16:58:30.364168] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:38.199 [2024-11-20 16:58:30.365144] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:38.199 [2024-11-20 16:58:30.365150] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:38.199 [2024-11-20 16:58:30.365153] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:38.199 [2024-11-20 16:58:30.365171] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:38.199 [2024-11-20 16:58:30.365180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:38.199 [2024-11-20 16:58:30.365191] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:38.199 [2024-11-20 16:58:30.365195] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:38.199 [2024-11-20 16:58:30.365197] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:38.199 [2024-11-20 16:58:30.365207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:38.462 [2024-11-20 16:58:30.373166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:38.462 [2024-11-20 16:58:30.373178] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:38.462 [2024-11-20 16:58:30.373182] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:38.462 [2024-11-20 16:58:30.373185] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:38.462 [2024-11-20 16:58:30.373188] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:38.462 [2024-11-20 16:58:30.373194] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:38.462 [2024-11-20 16:58:30.373197] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:38.462 [2024-11-20 16:58:30.373201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:38.462 [2024-11-20 16:58:30.373208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:38.462 [2024-11-20 
16:58:30.373216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:38.462 [2024-11-20 16:58:30.381163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:38.462 [2024-11-20 16:58:30.381173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.462 [2024-11-20 16:58:30.381180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.462 [2024-11-20 16:58:30.381186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.462 [2024-11-20 16:58:30.381192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.462 [2024-11-20 16:58:30.381195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:38.462 [2024-11-20 16:58:30.381200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:38.462 [2024-11-20 16:58:30.381207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:38.462 [2024-11-20 16:58:30.389165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:38.462 [2024-11-20 16:58:30.389172] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:38.462 [2024-11-20 16:58:30.389176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:38.462 [2024-11-20 16:58:30.389182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:38.462 [2024-11-20 16:58:30.389187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:38.462 [2024-11-20 16:58:30.389193] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:38.462 [2024-11-20 16:58:30.397164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:38.462 [2024-11-20 16:58:30.397213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:38.462 [2024-11-20 16:58:30.397219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:38.462 [2024-11-20 16:58:30.397225] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:38.462 [2024-11-20 16:58:30.397228] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:15:38.462 [2024-11-20 16:58:30.397231] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:38.462 [2024-11-20 16:58:30.397235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:38.462 [2024-11-20 16:58:30.405164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:38.462 [2024-11-20 16:58:30.405173] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:38.462 [2024-11-20 16:58:30.405180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:38.462 [2024-11-20 16:58:30.405185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:38.462 [2024-11-20 16:58:30.405190] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:38.462 [2024-11-20 16:58:30.405193] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:38.462 [2024-11-20 16:58:30.405196] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:38.462 [2024-11-20 16:58:30.405200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:38.462 [2024-11-20 16:58:30.413164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:38.462 [2024-11-20 16:58:30.413177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:38.462 [2024-11-20 16:58:30.413183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:38.462 [2024-11-20 16:58:30.413189] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:38.462 [2024-11-20 16:58:30.413192] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:38.462 [2024-11-20 16:58:30.413194] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:38.462 [2024-11-20 16:58:30.413198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:38.462 [2024-11-20 16:58:30.421163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:38.462 [2024-11-20 16:58:30.421171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:38.462 [2024-11-20 16:58:30.421176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:38.462 [2024-11-20 16:58:30.421182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:15:38.462 [2024-11-20 16:58:30.421187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:38.462 [2024-11-20 16:58:30.421192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:38.462 [2024-11-20 16:58:30.421196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:38.462 [2024-11-20 16:58:30.421199] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:38.462 [2024-11-20 16:58:30.421203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:38.462 [2024-11-20 16:58:30.421206] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:38.462 [2024-11-20 16:58:30.421220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:38.462 [2024-11-20 16:58:30.429163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:38.462 [2024-11-20 16:58:30.429174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:38.462 [2024-11-20 16:58:30.437163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:38.462 [2024-11-20 16:58:30.437172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:38.462 [2024-11-20 16:58:30.445162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:38.462 [2024-11-20 16:58:30.445171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:38.462 [2024-11-20 16:58:30.453162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:38.462 [2024-11-20 16:58:30.453175] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:38.463 [2024-11-20 16:58:30.453178] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:38.463 [2024-11-20 16:58:30.453181] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:38.463 [2024-11-20 16:58:30.453184] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:38.463 [2024-11-20 16:58:30.453186] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:38.463 [2024-11-20 16:58:30.453191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:38.463 [2024-11-20 16:58:30.453196] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:38.463 
[2024-11-20 16:58:30.453199] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:38.463 [2024-11-20 16:58:30.453202] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:38.463 [2024-11-20 16:58:30.453206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:38.463 [2024-11-20 16:58:30.453211] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:38.463 [2024-11-20 16:58:30.453214] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:38.463 [2024-11-20 16:58:30.453217] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:38.463 [2024-11-20 16:58:30.453221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:38.463 [2024-11-20 16:58:30.453228] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:38.463 [2024-11-20 16:58:30.453231] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:38.463 [2024-11-20 16:58:30.453234] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:38.463 [2024-11-20 16:58:30.453238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:38.463 [2024-11-20 16:58:30.461163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:38.463 [2024-11-20 16:58:30.461173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:38.463 [2024-11-20 16:58:30.461181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:38.463 [2024-11-20 16:58:30.461186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:38.463 ===================================================== 00:15:38.463 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:38.463 ===================================================== 00:15:38.463 Controller Capabilities/Features 00:15:38.463 ================================ 00:15:38.463 Vendor ID: 4e58 00:15:38.463 Subsystem Vendor ID: 4e58 00:15:38.463 Serial Number: SPDK2 00:15:38.463 Model Number: SPDK bdev Controller 00:15:38.463 Firmware Version: 25.01 00:15:38.463 Recommended Arb Burst: 6 00:15:38.463 IEEE OUI Identifier: 8d 6b 50 00:15:38.463 Multi-path I/O 00:15:38.463 May have multiple subsystem ports: Yes 00:15:38.463 May have multiple controllers: Yes 00:15:38.463 Associated with SR-IOV VF: No 00:15:38.463 Max Data Transfer Size: 131072 00:15:38.463 Max Number of Namespaces: 32 00:15:38.463 Max Number of I/O Queues: 127 00:15:38.463 NVMe Specification Version (VS): 1.3 00:15:38.463 NVMe Specification Version (Identify): 1.3 00:15:38.463 Maximum Queue Entries: 256 00:15:38.463 Contiguous Queues Required: Yes 00:15:38.463 Arbitration Mechanisms Supported 00:15:38.463 Weighted Round Robin: Not Supported 00:15:38.463 Vendor Specific: Not 
Supported 00:15:38.463 Reset Timeout: 15000 ms 00:15:38.463 Doorbell Stride: 4 bytes 00:15:38.463 NVM Subsystem Reset: Not Supported 00:15:38.463 Command Sets Supported 00:15:38.463 NVM Command Set: Supported 00:15:38.463 Boot Partition: Not Supported 00:15:38.463 Memory Page Size Minimum: 4096 bytes 00:15:38.463 Memory Page Size Maximum: 4096 bytes 00:15:38.463 Persistent Memory Region: Not Supported 00:15:38.463 Optional Asynchronous Events Supported 00:15:38.463 Namespace Attribute Notices: Supported 00:15:38.463 Firmware Activation Notices: Not Supported 00:15:38.463 ANA Change Notices: Not Supported 00:15:38.463 PLE Aggregate Log Change Notices: Not Supported 00:15:38.463 LBA Status Info Alert Notices: Not Supported 00:15:38.463 EGE Aggregate Log Change Notices: Not Supported 00:15:38.463 Normal NVM Subsystem Shutdown event: Not Supported 00:15:38.463 Zone Descriptor Change Notices: Not Supported 00:15:38.463 Discovery Log Change Notices: Not Supported 00:15:38.463 Controller Attributes 00:15:38.463 128-bit Host Identifier: Supported 00:15:38.463 Non-Operational Permissive Mode: Not Supported 00:15:38.463 NVM Sets: Not Supported 00:15:38.463 Read Recovery Levels: Not Supported 00:15:38.463 Endurance Groups: Not Supported 00:15:38.463 Predictable Latency Mode: Not Supported 00:15:38.463 Traffic Based Keep ALive: Not Supported 00:15:38.463 Namespace Granularity: Not Supported 00:15:38.463 SQ Associations: Not Supported 00:15:38.463 UUID List: Not Supported 00:15:38.463 Multi-Domain Subsystem: Not Supported 00:15:38.463 Fixed Capacity Management: Not Supported 00:15:38.463 Variable Capacity Management: Not Supported 00:15:38.463 Delete Endurance Group: Not Supported 00:15:38.463 Delete NVM Set: Not Supported 00:15:38.463 Extended LBA Formats Supported: Not Supported 00:15:38.463 Flexible Data Placement Supported: Not Supported 00:15:38.463 00:15:38.463 Controller Memory Buffer Support 00:15:38.463 ================================ 00:15:38.463 Supported: No 00:15:38.463 00:15:38.463 Persistent Memory Region Support 00:15:38.463 ================================ 00:15:38.463 Supported: No 00:15:38.463 00:15:38.463 Admin Command Set Attributes 00:15:38.463 ============================ 00:15:38.463 Security Send/Receive: Not Supported 00:15:38.463 Format NVM: Not Supported 00:15:38.463 Firmware Activate/Download: Not Supported 00:15:38.463 Namespace Management: Not Supported 00:15:38.463 Device Self-Test: Not Supported 00:15:38.463 Directives: Not Supported 00:15:38.463 NVMe-MI: Not Supported 00:15:38.463 Virtualization Management: Not Supported 00:15:38.463 Doorbell Buffer Config: Not Supported 00:15:38.463 Get LBA Status Capability: Not Supported 00:15:38.463 Command & Feature Lockdown Capability: Not Supported 00:15:38.463 Abort Command Limit: 4 00:15:38.463 Async Event Request Limit: 4 00:15:38.463 Number of Firmware Slots: N/A 00:15:38.463 Firmware Slot 1 Read-Only: N/A 00:15:38.463 Firmware Activation Without Reset: N/A 00:15:38.463 Multiple Update Detection Support: N/A 00:15:38.463 Firmware Update Granularity: No Information Provided 00:15:38.463 Per-Namespace SMART Log: No 00:15:38.463 Asymmetric Namespace Access Log Page: Not Supported 00:15:38.463 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:38.463 Command Effects Log Page: Supported 00:15:38.463 Get Log Page Extended Data: Supported 00:15:38.463 Telemetry Log Pages: Not Supported 00:15:38.463 Persistent Event Log Pages: Not Supported 00:15:38.463 Supported Log Pages Log Page: May Support 00:15:38.463 Commands Supported & 
Effects Log Page: Not Supported 00:15:38.463 Feature Identifiers & Effects Log Page:May Support 00:15:38.463 NVMe-MI Commands & Effects Log Page: May Support 00:15:38.463 Data Area 4 for Telemetry Log: Not Supported 00:15:38.463 Error Log Page Entries Supported: 128 00:15:38.463 Keep Alive: Supported 00:15:38.463 Keep Alive Granularity: 10000 ms 00:15:38.463 00:15:38.463 NVM Command Set Attributes 00:15:38.463 ========================== 00:15:38.463 Submission Queue Entry Size 00:15:38.463 Max: 64 00:15:38.463 Min: 64 00:15:38.463 Completion Queue Entry Size 00:15:38.463 Max: 16 00:15:38.463 Min: 16 00:15:38.463 Number of Namespaces: 32 00:15:38.463 Compare Command: Supported 00:15:38.463 Write Uncorrectable Command: Not Supported 00:15:38.463 Dataset Management Command: Supported 00:15:38.463 Write Zeroes Command: Supported 00:15:38.463 Set Features Save Field: Not Supported 00:15:38.463 Reservations: Not Supported 00:15:38.463 Timestamp: Not Supported 00:15:38.463 Copy: Supported 00:15:38.463 Volatile Write Cache: Present 00:15:38.463 Atomic Write Unit (Normal): 1 00:15:38.463 Atomic Write Unit (PFail): 1 00:15:38.463 Atomic Compare & Write Unit: 1 00:15:38.463 Fused Compare & Write: Supported 00:15:38.463 Scatter-Gather List 00:15:38.463 SGL Command Set: Supported (Dword aligned) 00:15:38.463 SGL Keyed: Not Supported 00:15:38.463 SGL Bit Bucket Descriptor: Not Supported 00:15:38.463 SGL Metadata Pointer: Not Supported 00:15:38.463 Oversized SGL: Not Supported 00:15:38.463 SGL Metadata Address: Not Supported 00:15:38.463 SGL Offset: Not Supported 00:15:38.463 Transport SGL Data Block: Not Supported 00:15:38.463 Replay Protected Memory Block: Not Supported 00:15:38.463 00:15:38.463 Firmware Slot Information 00:15:38.463 ========================= 00:15:38.463 Active slot: 1 00:15:38.463 Slot 1 Firmware Revision: 25.01 00:15:38.463 00:15:38.463 00:15:38.463 Commands Supported and Effects 00:15:38.463 ============================== 00:15:38.463 Admin Commands 00:15:38.464 -------------- 00:15:38.464 Get Log Page (02h): Supported 00:15:38.464 Identify (06h): Supported 00:15:38.464 Abort (08h): Supported 00:15:38.464 Set Features (09h): Supported 00:15:38.464 Get Features (0Ah): Supported 00:15:38.464 Asynchronous Event Request (0Ch): Supported 00:15:38.464 Keep Alive (18h): Supported 00:15:38.464 I/O Commands 00:15:38.464 ------------ 00:15:38.464 Flush (00h): Supported LBA-Change 00:15:38.464 Write (01h): Supported LBA-Change 00:15:38.464 Read (02h): Supported 00:15:38.464 Compare (05h): Supported 00:15:38.464 Write Zeroes (08h): Supported LBA-Change 00:15:38.464 Dataset Management (09h): Supported LBA-Change 00:15:38.464 Copy (19h): Supported LBA-Change 00:15:38.464 00:15:38.464 Error Log 00:15:38.464 ========= 00:15:38.464 00:15:38.464 Arbitration 00:15:38.464 =========== 00:15:38.464 Arbitration Burst: 1 00:15:38.464 00:15:38.464 Power Management 00:15:38.464 ================ 00:15:38.464 Number of Power States: 1 00:15:38.464 Current Power State: Power State #0 00:15:38.464 Power State #0: 00:15:38.464 Max Power: 0.00 W 00:15:38.464 Non-Operational State: Operational 00:15:38.464 Entry Latency: Not Reported 00:15:38.464 Exit Latency: Not Reported 00:15:38.464 Relative Read Throughput: 0 00:15:38.464 Relative Read Latency: 0 00:15:38.464 Relative Write Throughput: 0 00:15:38.464 Relative Write Latency: 0 00:15:38.464 Idle Power: Not Reported 00:15:38.464 Active Power: Not Reported 00:15:38.464 Non-Operational Permissive Mode: Not Supported 00:15:38.464 00:15:38.464 Health Information 
00:15:38.464 ================== 00:15:38.464 Critical Warnings: 00:15:38.464 Available Spare Space: OK 00:15:38.464 Temperature: OK 00:15:38.464 Device Reliability: OK 00:15:38.464 Read Only: No 00:15:38.464 Volatile Memory Backup: OK 00:15:38.464 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:38.464 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:38.464 Available Spare: 0% 00:15:38.464 Available Sp[2024-11-20 16:58:30.461260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:38.464 [2024-11-20 16:58:30.469164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:38.464 [2024-11-20 16:58:30.469194] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:38.464 [2024-11-20 16:58:30.469201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.464 [2024-11-20 16:58:30.469206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.464 [2024-11-20 16:58:30.469211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.464 [2024-11-20 16:58:30.469215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.464 [2024-11-20 16:58:30.469247] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:38.464 [2024-11-20 16:58:30.469256] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:38.464 [2024-11-20 16:58:30.470256] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:38.464 [2024-11-20 16:58:30.470295] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:38.464 [2024-11-20 16:58:30.470300] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:38.464 [2024-11-20 16:58:30.471263] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:38.464 [2024-11-20 16:58:30.471272] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:38.464 [2024-11-20 16:58:30.471319] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:38.464 [2024-11-20 16:58:30.472289] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:38.464 are Threshold: 0% 00:15:38.464 Life Percentage Used: 0% 00:15:38.464 Data Units Read: 0 00:15:38.464 Data Units Written: 0 00:15:38.464 Host Read Commands: 0 00:15:38.464 Host Write Commands: 0 00:15:38.464 Controller Busy Time: 0 minutes 00:15:38.464 Power Cycles: 0 00:15:38.464 Power On Hours: 0 hours 00:15:38.464 Unsafe Shutdowns: 0 00:15:38.464 Unrecoverable Media Errors: 0 00:15:38.464 Lifetime Error Log Entries: 0 00:15:38.464 Warning Temperature 
Time: 0 minutes 00:15:38.464 Critical Temperature Time: 0 minutes 00:15:38.464 00:15:38.464 Number of Queues 00:15:38.464 ================ 00:15:38.464 Number of I/O Submission Queues: 127 00:15:38.464 Number of I/O Completion Queues: 127 00:15:38.464 00:15:38.464 Active Namespaces 00:15:38.464 ================= 00:15:38.464 Namespace ID:1 00:15:38.464 Error Recovery Timeout: Unlimited 00:15:38.464 Command Set Identifier: NVM (00h) 00:15:38.464 Deallocate: Supported 00:15:38.464 Deallocated/Unwritten Error: Not Supported 00:15:38.464 Deallocated Read Value: Unknown 00:15:38.464 Deallocate in Write Zeroes: Not Supported 00:15:38.464 Deallocated Guard Field: 0xFFFF 00:15:38.464 Flush: Supported 00:15:38.464 Reservation: Supported 00:15:38.464 Namespace Sharing Capabilities: Multiple Controllers 00:15:38.464 Size (in LBAs): 131072 (0GiB) 00:15:38.464 Capacity (in LBAs): 131072 (0GiB) 00:15:38.464 Utilization (in LBAs): 131072 (0GiB) 00:15:38.464 NGUID: 2235033A52AE420189F329F40EB54BA5 00:15:38.464 UUID: 2235033a-52ae-4201-89f3-29f40eb54ba5 00:15:38.464 Thin Provisioning: Not Supported 00:15:38.464 Per-NS Atomic Units: Yes 00:15:38.464 Atomic Boundary Size (Normal): 0 00:15:38.464 Atomic Boundary Size (PFail): 0 00:15:38.464 Atomic Boundary Offset: 0 00:15:38.464 Maximum Single Source Range Length: 65535 00:15:38.464 Maximum Copy Length: 65535 00:15:38.464 Maximum Source Range Count: 1 00:15:38.464 NGUID/EUI64 Never Reused: No 00:15:38.464 Namespace Write Protected: No 00:15:38.464 Number of LBA Formats: 1 00:15:38.464 Current LBA Format: LBA Format #00 00:15:38.464 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:38.464 00:15:38.464 16:58:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:38.725 [2024-11-20 16:58:30.668227] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:44.006 Initializing NVMe Controllers 00:15:44.006 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:44.006 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:44.006 Initialization complete. Launching workers. 
00:15:44.006 ======================================================== 00:15:44.006 Latency(us) 00:15:44.006 Device Information : IOPS MiB/s Average min max 00:15:44.006 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40064.60 156.50 3197.21 846.73 8422.66 00:15:44.006 ======================================================== 00:15:44.006 Total : 40064.60 156.50 3197.21 846.73 8422.66 00:15:44.006 00:15:44.006 [2024-11-20 16:58:35.778359] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:44.006 16:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:44.006 [2024-11-20 16:58:35.967951] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:49.283 Initializing NVMe Controllers 00:15:49.283 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:49.283 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:49.283 Initialization complete. Launching workers. 00:15:49.283 ======================================================== 00:15:49.283 Latency(us) 00:15:49.283 Device Information : IOPS MiB/s Average min max 00:15:49.283 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39980.40 156.17 3202.12 862.08 10973.76 00:15:49.283 ======================================================== 00:15:49.283 Total : 39980.40 156.17 3202.12 862.08 10973.76 00:15:49.283 00:15:49.283 [2024-11-20 16:58:40.988764] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:49.283 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:49.283 [2024-11-20 16:58:41.200966] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:54.567 [2024-11-20 16:58:46.337252] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:54.567 Initializing NVMe Controllers 00:15:54.567 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:54.567 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:54.567 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:54.567 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:54.567 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:54.567 Initialization complete. Launching workers. 
00:15:54.567 Starting thread on core 2 00:15:54.567 Starting thread on core 3 00:15:54.567 Starting thread on core 1 00:15:54.567 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:54.567 [2024-11-20 16:58:46.588193] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:58.763 [2024-11-20 16:58:50.284306] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:58.763 Initializing NVMe Controllers 00:15:58.763 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:58.763 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:58.763 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:58.763 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:58.763 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:58.763 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:58.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:58.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:58.763 Initialization complete. Launching workers. 00:15:58.763 Starting thread on core 1 with urgent priority queue 00:15:58.763 Starting thread on core 2 with urgent priority queue 00:15:58.763 Starting thread on core 3 with urgent priority queue 00:15:58.763 Starting thread on core 0 with urgent priority queue 00:15:58.763 SPDK bdev Controller (SPDK2 ) core 0: 5412.33 IO/s 18.48 secs/100000 ios 00:15:58.763 SPDK bdev Controller (SPDK2 ) core 1: 4091.67 IO/s 24.44 secs/100000 ios 00:15:58.763 SPDK bdev Controller (SPDK2 ) core 2: 5783.00 IO/s 17.29 secs/100000 ios 00:15:58.763 SPDK bdev Controller (SPDK2 ) core 3: 4378.67 IO/s 22.84 secs/100000 ios 00:15:58.763 ======================================================== 00:15:58.763 00:15:58.763 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:58.763 [2024-11-20 16:58:50.522056] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:58.763 Initializing NVMe Controllers 00:15:58.763 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:58.763 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:58.763 Namespace ID: 1 size: 0GB 00:15:58.763 Initialization complete. 00:15:58.763 INFO: using host memory buffer for IO 00:15:58.763 Hello world! 
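For reference, the five client-side runs traced above (steps @84 through @88 of nvmf_vfio_user.sh) all point the same transport ID at the second vfio-user controller and differ only in tool and workload flags. A condensed sketch of the invocation pattern, using the commands exactly as they appear in this log ($SPDK is shorthand introduced here for the workspace path; this is a sketch, not the test script itself):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

  # 4 KiB queued I/O for 5 s pinned to core 1 (-c 0x2), read then write
  $SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
  $SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2

  # reconnect stress across cores 1-3 (-c 0xE), then arbitration and hello_world
  $SPDK/build/examples/reconnect   -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
  $SPDK/build/examples/arbitration -t 3 -r "$TRID" -d 256 -g
  $SPDK/build/examples/hello_world -d 256 -g -r "$TRID"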
00:15:58.763 [2024-11-20 16:58:50.531126] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:58.764 16:58:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:58.764 [2024-11-20 16:58:50.759423] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:59.704 Initializing NVMe Controllers 00:15:59.704 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:59.704 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:59.704 Initialization complete. Launching workers. 00:15:59.704 submit (in ns) avg, min, max = 6112.0, 2812.5, 4005714.2 00:15:59.704 complete (in ns) avg, min, max = 16476.9, 1653.3, 4002748.3 00:15:59.704 00:15:59.704 Submit histogram 00:15:59.704 ================ 00:15:59.704 Range in us Cumulative Count 00:15:59.704 2.800 - 2.813: 0.0049% ( 1) 00:15:59.704 2.813 - 2.827: 0.8117% ( 164) 00:15:59.704 2.827 - 2.840: 2.2383% ( 290) 00:15:59.704 2.840 - 2.853: 4.7471% ( 510) 00:15:59.704 2.853 - 2.867: 10.0453% ( 1077) 00:15:59.704 2.867 - 2.880: 15.7713% ( 1164) 00:15:59.704 2.880 - 2.893: 21.4286% ( 1150) 00:15:59.704 2.893 - 2.907: 26.7316% ( 1078) 00:15:59.704 2.907 - 2.920: 32.0887% ( 1089) 00:15:59.704 2.920 - 2.933: 37.6968% ( 1140) 00:15:59.704 2.933 - 2.947: 42.6800% ( 1013) 00:15:59.704 2.947 - 2.960: 48.8489% ( 1254) 00:15:59.704 2.960 - 2.973: 54.7717% ( 1204) 00:15:59.704 2.973 - 2.987: 63.2674% ( 1727) 00:15:59.704 2.987 - 3.000: 71.6942% ( 1713) 00:15:59.704 3.000 - 3.013: 79.5307% ( 1593) 00:15:59.704 3.013 - 3.027: 86.3538% ( 1387) 00:15:59.704 3.027 - 3.040: 91.8339% ( 1114) 00:15:59.704 3.040 - 3.053: 95.7399% ( 794) 00:15:59.704 3.053 - 3.067: 97.7224% ( 403) 00:15:59.704 3.067 - 3.080: 98.6816% ( 195) 00:15:59.704 3.080 - 3.093: 99.1686% ( 99) 00:15:59.704 3.093 - 3.107: 99.3752% ( 42) 00:15:59.704 3.107 - 3.120: 99.4638% ( 18) 00:15:59.704 3.120 - 3.133: 99.4982% ( 7) 00:15:59.704 3.133 - 3.147: 99.5277% ( 6) 00:15:59.704 3.147 - 3.160: 99.5425% ( 3) 00:15:59.704 3.160 - 3.173: 99.5474% ( 1) 00:15:59.704 3.173 - 3.187: 99.5523% ( 1) 00:15:59.704 3.187 - 3.200: 99.5622% ( 2) 00:15:59.704 3.227 - 3.240: 99.5671% ( 1) 00:15:59.704 3.253 - 3.267: 99.5720% ( 1) 00:15:59.704 3.293 - 3.307: 99.5868% ( 3) 00:15:59.704 3.347 - 3.360: 99.5917% ( 1) 00:15:59.704 3.360 - 3.373: 99.5966% ( 1) 00:15:59.704 3.373 - 3.387: 99.6015% ( 1) 00:15:59.704 3.520 - 3.547: 99.6065% ( 1) 00:15:59.704 3.547 - 3.573: 99.6114% ( 1) 00:15:59.704 3.733 - 3.760: 99.6163% ( 1) 00:15:59.704 3.760 - 3.787: 99.6212% ( 1) 00:15:59.704 3.893 - 3.920: 99.6261% ( 1) 00:15:59.704 4.160 - 4.187: 99.6311% ( 1) 00:15:59.704 4.427 - 4.453: 99.6360% ( 1) 00:15:59.704 4.560 - 4.587: 99.6409% ( 1) 00:15:59.704 4.587 - 4.613: 99.6458% ( 1) 00:15:59.704 4.667 - 4.693: 99.6507% ( 1) 00:15:59.704 4.907 - 4.933: 99.6556% ( 1) 00:15:59.704 5.013 - 5.040: 99.6606% ( 1) 00:15:59.704 5.040 - 5.067: 99.6655% ( 1) 00:15:59.704 5.093 - 5.120: 99.6704% ( 1) 00:15:59.704 5.280 - 5.307: 99.6753% ( 1) 00:15:59.704 5.307 - 5.333: 99.6802% ( 1) 00:15:59.704 5.493 - 5.520: 99.6852% ( 1) 00:15:59.704 5.600 - 5.627: 99.6901% ( 1) 00:15:59.704 5.707 - 5.733: 99.6950% ( 1) 00:15:59.704 5.733 - 5.760: 99.7048% ( 2) 00:15:59.704 5.760 - 5.787: 99.7098% ( 1) 00:15:59.704 5.813 - 5.840: 
99.7147% ( 1) 00:15:59.704 5.840 - 5.867: 99.7245% ( 2) 00:15:59.704 5.893 - 5.920: 99.7294% ( 1) 00:15:59.704 5.920 - 5.947: 99.7393% ( 2) 00:15:59.704 5.947 - 5.973: 99.7442% ( 1) 00:15:59.704 6.000 - 6.027: 99.7491% ( 1) 00:15:59.704 6.080 - 6.107: 99.7540% ( 1) 00:15:59.704 6.107 - 6.133: 99.7590% ( 1) 00:15:59.704 6.133 - 6.160: 99.7639% ( 1) 00:15:59.704 6.160 - 6.187: 99.7688% ( 1) 00:15:59.704 6.187 - 6.213: 99.7737% ( 1) 00:15:59.704 6.293 - 6.320: 99.7786% ( 1) 00:15:59.704 6.320 - 6.347: 99.7885% ( 2) 00:15:59.704 6.347 - 6.373: 99.7983% ( 2) 00:15:59.704 6.373 - 6.400: 99.8032% ( 1) 00:15:59.704 6.453 - 6.480: 99.8180% ( 3) 00:15:59.704 6.507 - 6.533: 99.8229% ( 1) 00:15:59.704 6.533 - 6.560: 99.8377% ( 3) 00:15:59.704 6.560 - 6.587: 99.8426% ( 1) 00:15:59.704 6.613 - 6.640: 99.8475% ( 1) 00:15:59.704 6.640 - 6.667: 99.8524% ( 1) 00:15:59.704 6.720 - 6.747: 99.8573% ( 1) 00:15:59.704 [2024-11-20 16:58:51.854685] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:59.964 6.880 - 6.933: 99.8672% ( 2) 00:15:59.964 7.040 - 7.093: 99.8721% ( 1) 00:15:59.964 7.147 - 7.200: 99.8770% ( 1) 00:15:59.964 7.307 - 7.360: 99.8819% ( 1) 00:15:59.964 7.360 - 7.413: 99.8869% ( 1) 00:15:59.964 7.680 - 7.733: 99.8918% ( 1) 00:15:59.964 7.733 - 7.787: 99.9016% ( 2) 00:15:59.964 8.000 - 8.053: 99.9065% ( 1) 00:15:59.964 8.267 - 8.320: 99.9115% ( 1) 00:15:59.964 8.693 - 8.747: 99.9164% ( 1) 00:15:59.964 58.453 - 58.880: 99.9213% ( 1) 00:15:59.964 3986.773 - 4014.080: 100.0000% ( 16) 00:15:59.964 00:15:59.964 Complete histogram 00:15:59.964 ================== 00:15:59.964 Range in us Cumulative Count 00:15:59.964 1.653 - 1.660: 0.6444% ( 131) 00:15:59.964 1.660 - 1.667: 0.7527% ( 22) 00:15:59.964 1.667 - 1.673: 0.7723% ( 4) 00:15:59.964 1.673 - 1.680: 0.8904% ( 24) 00:15:59.964 1.680 - 1.687: 0.9888% ( 20) 00:15:59.964 1.687 - 1.693: 1.0380% ( 10) 00:15:59.964 1.693 - 1.700: 1.0577% ( 4) 00:15:59.964 1.700 - 1.707: 1.0823% ( 5) 00:15:59.964 1.707 - 1.720: 56.1049% ( 11185) 00:15:59.964 1.720 - 1.733: 71.4925% ( 3128) 00:15:59.964 1.733 - 1.747: 80.1948% ( 1769) 00:15:59.964 1.747 - 1.760: 83.0185% ( 574) 00:15:59.964 1.760 - 1.773: 85.8078% ( 567) 00:15:59.964 1.773 - 1.787: 91.0813% ( 1072) 00:15:59.964 1.787 - 1.800: 96.0203% ( 1004) 00:15:59.964 1.800 - 1.813: 98.4012% ( 484) 00:15:59.964 1.813 - 1.827: 99.2473% ( 172) 00:15:59.964 1.827 - 1.840: 99.4343% ( 38) 00:15:59.964 1.840 - 1.853: 99.4589% ( 5) 00:15:59.964 1.853 - 1.867: 99.4638% ( 1) 00:15:59.964 1.867 - 1.880: 99.4736% ( 2) 00:15:59.964 1.880 - 1.893: 99.4786% ( 1) 00:15:59.964 2.000 - 2.013: 99.4835% ( 1) 00:15:59.964 3.707 - 3.733: 99.4884% ( 1) 00:15:59.964 4.160 - 4.187: 99.4933% ( 1) 00:15:59.964 4.347 - 4.373: 99.4982% ( 1) 00:15:59.964 4.373 - 4.400: 99.5031% ( 1) 00:15:59.964 4.400 - 4.427: 99.5130% ( 2) 00:15:59.964 4.560 - 4.587: 99.5179% ( 1) 00:15:59.964 4.613 - 4.640: 99.5228% ( 1) 00:15:59.964 4.747 - 4.773: 99.5376% ( 3) 00:15:59.964 4.800 - 4.827: 99.5425% ( 1) 00:15:59.964 4.853 - 4.880: 99.5474% ( 1) 00:15:59.964 4.907 - 4.933: 99.5573% ( 2) 00:15:59.964 5.040 - 5.067: 99.5622% ( 1) 00:15:59.964 5.120 - 5.147: 99.5671% ( 1) 00:15:59.964 5.147 - 5.173: 99.5769% ( 2) 00:15:59.964 5.280 - 5.307: 99.5868% ( 2) 00:15:59.964 5.467 - 5.493: 99.5917% ( 1) 00:15:59.965 5.547 - 5.573: 99.5966% ( 1) 00:15:59.965 5.680 - 5.707: 99.6065% ( 2) 00:15:59.965 5.733 - 5.760: 99.6114% ( 1) 00:15:59.965 6.347 - 6.373: 99.6163% ( 1) 00:15:59.965 6.560 - 6.587: 99.6212% ( 1) 
00:15:59.965 7.093 - 7.147: 99.6261% ( 1) 00:15:59.965 50.987 - 51.200: 99.6311% ( 1) 00:15:59.965 3986.773 - 4014.080: 100.0000% ( 75) 00:15:59.965 00:15:59.965 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:59.965 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:59.965 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:59.965 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:59.965 16:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:59.965 [ 00:15:59.965 { 00:15:59.965 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:59.965 "subtype": "Discovery", 00:15:59.965 "listen_addresses": [], 00:15:59.965 "allow_any_host": true, 00:15:59.965 "hosts": [] 00:15:59.965 }, 00:15:59.965 { 00:15:59.965 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:59.965 "subtype": "NVMe", 00:15:59.965 "listen_addresses": [ 00:15:59.965 { 00:15:59.965 "trtype": "VFIOUSER", 00:15:59.965 "adrfam": "IPv4", 00:15:59.965 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:59.965 "trsvcid": "0" 00:15:59.965 } 00:15:59.965 ], 00:15:59.965 "allow_any_host": true, 00:15:59.965 "hosts": [], 00:15:59.965 "serial_number": "SPDK1", 00:15:59.965 "model_number": "SPDK bdev Controller", 00:15:59.965 "max_namespaces": 32, 00:15:59.965 "min_cntlid": 1, 00:15:59.965 "max_cntlid": 65519, 00:15:59.965 "namespaces": [ 00:15:59.965 { 00:15:59.965 "nsid": 1, 00:15:59.965 "bdev_name": "Malloc1", 00:15:59.965 "name": "Malloc1", 00:15:59.965 "nguid": "71B3F31FE1244647A902C6C221F5051E", 00:15:59.965 "uuid": "71b3f31f-e124-4647-a902-c6c221f5051e" 00:15:59.965 }, 00:15:59.965 { 00:15:59.965 "nsid": 2, 00:15:59.965 "bdev_name": "Malloc3", 00:15:59.965 "name": "Malloc3", 00:15:59.965 "nguid": "BDDA8CE7130546368B550FA9A975386F", 00:15:59.965 "uuid": "bdda8ce7-1305-4636-8b55-0fa9a975386f" 00:15:59.965 } 00:15:59.965 ] 00:15:59.965 }, 00:15:59.965 { 00:15:59.965 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:59.965 "subtype": "NVMe", 00:15:59.965 "listen_addresses": [ 00:15:59.965 { 00:15:59.965 "trtype": "VFIOUSER", 00:15:59.965 "adrfam": "IPv4", 00:15:59.965 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:59.965 "trsvcid": "0" 00:15:59.965 } 00:15:59.965 ], 00:15:59.965 "allow_any_host": true, 00:15:59.965 "hosts": [], 00:15:59.965 "serial_number": "SPDK2", 00:15:59.965 "model_number": "SPDK bdev Controller", 00:15:59.965 "max_namespaces": 32, 00:15:59.965 "min_cntlid": 1, 00:15:59.965 "max_cntlid": 65519, 00:15:59.965 "namespaces": [ 00:15:59.965 { 00:15:59.965 "nsid": 1, 00:15:59.965 "bdev_name": "Malloc2", 00:15:59.965 "name": "Malloc2", 00:15:59.965 "nguid": "2235033A52AE420189F329F40EB54BA5", 00:15:59.965 "uuid": "2235033a-52ae-4201-89f3-29f40eb54ba5" 00:15:59.965 } 00:15:59.965 ] 00:15:59.965 } 00:15:59.965 ] 00:15:59.965 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:59.965 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1922535 00:15:59.965 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:59.965 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:59.965 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:59.965 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:59.965 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:59.965 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:59.965 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:59.965 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:00.225 [2024-11-20 16:58:52.239020] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:00.225 Malloc4 00:16:00.225 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:00.485 [2024-11-20 16:58:52.425329] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:00.485 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:00.485 Asynchronous Event Request test 00:16:00.485 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:00.485 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:00.485 Registering asynchronous event callbacks... 00:16:00.485 Starting namespace attribute notice tests for all controllers... 00:16:00.485 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:00.485 aer_cb - Changed Namespace 00:16:00.485 Cleaning up... 
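The "aer_cb - Changed Namespace" result above comes from the touch-file handshake traced in the surrounding script lines (@30, @37, @38, @40, @41, @44). A sketch of that flow, reusing $SPDK and $TRID from the sketch earlier and the exact RPCs from this log (the polling loop here stands in for the waitforfile helper in autotest_common.sh):

  # start the AER listener in the background; it touches the marker file
  # once its Asynchronous Event Request is armed on the admin queue
  $SPDK/test/nvme/aer/aer -r "$TRID" -n 2 -g -t /tmp/aer_touch_file &
  aerpid=$!
  while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done
  rm -f /tmp/aer_touch_file

  # hot-adding a second namespace triggers the Namespace Attribute Notice
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2

  # the listener logs "aer_cb - Changed Namespace", cleans up and exits
  wait $aerpid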
00:16:00.485 [ 00:16:00.485 { 00:16:00.485 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:00.485 "subtype": "Discovery", 00:16:00.485 "listen_addresses": [], 00:16:00.485 "allow_any_host": true, 00:16:00.485 "hosts": [] 00:16:00.485 }, 00:16:00.485 { 00:16:00.485 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:00.485 "subtype": "NVMe", 00:16:00.485 "listen_addresses": [ 00:16:00.485 { 00:16:00.485 "trtype": "VFIOUSER", 00:16:00.485 "adrfam": "IPv4", 00:16:00.485 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:00.485 "trsvcid": "0" 00:16:00.485 } 00:16:00.485 ], 00:16:00.485 "allow_any_host": true, 00:16:00.485 "hosts": [], 00:16:00.485 "serial_number": "SPDK1", 00:16:00.485 "model_number": "SPDK bdev Controller", 00:16:00.485 "max_namespaces": 32, 00:16:00.485 "min_cntlid": 1, 00:16:00.485 "max_cntlid": 65519, 00:16:00.485 "namespaces": [ 00:16:00.485 { 00:16:00.485 "nsid": 1, 00:16:00.485 "bdev_name": "Malloc1", 00:16:00.485 "name": "Malloc1", 00:16:00.485 "nguid": "71B3F31FE1244647A902C6C221F5051E", 00:16:00.485 "uuid": "71b3f31f-e124-4647-a902-c6c221f5051e" 00:16:00.485 }, 00:16:00.485 { 00:16:00.485 "nsid": 2, 00:16:00.485 "bdev_name": "Malloc3", 00:16:00.485 "name": "Malloc3", 00:16:00.485 "nguid": "BDDA8CE7130546368B550FA9A975386F", 00:16:00.485 "uuid": "bdda8ce7-1305-4636-8b55-0fa9a975386f" 00:16:00.485 } 00:16:00.485 ] 00:16:00.485 }, 00:16:00.485 { 00:16:00.485 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:00.485 "subtype": "NVMe", 00:16:00.485 "listen_addresses": [ 00:16:00.485 { 00:16:00.485 "trtype": "VFIOUSER", 00:16:00.485 "adrfam": "IPv4", 00:16:00.485 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:00.485 "trsvcid": "0" 00:16:00.485 } 00:16:00.485 ], 00:16:00.485 "allow_any_host": true, 00:16:00.485 "hosts": [], 00:16:00.485 "serial_number": "SPDK2", 00:16:00.485 "model_number": "SPDK bdev Controller", 00:16:00.485 "max_namespaces": 32, 00:16:00.485 "min_cntlid": 1, 00:16:00.485 "max_cntlid": 65519, 00:16:00.485 "namespaces": [ 00:16:00.485 { 00:16:00.485 "nsid": 1, 00:16:00.485 "bdev_name": "Malloc2", 00:16:00.485 "name": "Malloc2", 00:16:00.485 "nguid": "2235033A52AE420189F329F40EB54BA5", 00:16:00.485 "uuid": "2235033a-52ae-4201-89f3-29f40eb54ba5" 00:16:00.485 }, 00:16:00.485 { 00:16:00.485 "nsid": 2, 00:16:00.485 "bdev_name": "Malloc4", 00:16:00.485 "name": "Malloc4", 00:16:00.486 "nguid": "596DA52E0F6F429D9AE2C2908B7A7DDC", 00:16:00.486 "uuid": "596da52e-0f6f-429d-9ae2-c2908b7a7ddc" 00:16:00.486 } 00:16:00.486 ] 00:16:00.486 } 00:16:00.486 ] 00:16:00.486 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1922535 00:16:00.486 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:00.486 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1913434 00:16:00.486 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1913434 ']' 00:16:00.486 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1913434 00:16:00.486 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:00.486 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.486 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1913434 00:16:00.745 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:00.745 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:00.745 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1913434' 00:16:00.745 killing process with pid 1913434 00:16:00.745 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1913434 00:16:00.745 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1913434 00:16:00.745 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:00.745 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:00.745 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:00.745 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:00.745 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:00.745 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1922561 00:16:00.745 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1922561' 00:16:00.745 Process pid: 1922561 00:16:00.745 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:00.745 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1922561 00:16:00.745 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1922561 ']' 00:16:00.745 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.745 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.745 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.745 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:00.746 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.746 16:58:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:00.746 [2024-11-20 16:58:52.895485] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:00.746 [2024-11-20 16:58:52.896402] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:16:00.746 [2024-11-20 16:58:52.896444] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.005 [2024-11-20 16:58:52.979932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:01.005 [2024-11-20 16:58:53.008873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.005 [2024-11-20 16:58:53.008905] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.005 [2024-11-20 16:58:53.008910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.005 [2024-11-20 16:58:53.008915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.005 [2024-11-20 16:58:53.008920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:01.005 [2024-11-20 16:58:53.010176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.005 [2024-11-20 16:58:53.010321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.005 [2024-11-20 16:58:53.010551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.005 [2024-11-20 16:58:53.010555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.005 [2024-11-20 16:58:53.061953] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:01.005 [2024-11-20 16:58:53.063018] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:01.005 [2024-11-20 16:58:53.063973] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:01.005 [2024-11-20 16:58:53.064489] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:01.006 [2024-11-20 16:58:53.064508] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
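With the interrupt-mode target up, the script re-creates the two vfio-user controllers; the trace that follows (@64 through @74 of nvmf_vfio_user.sh) is one transport RPC plus a per-device loop. Condensed as a sketch with the same RPCs ($rpc is shorthand introduced here; -M -I are the interrupt-mode transport options passed via setup_nvmf_vfio_user):

  rpc="$SPDK/scripts/rpc.py"
  $rpc nvmf_create_transport -t VFIOUSER -M -I
  mkdir -p /var/run/vfio-user
  for i in 1 2; do
      # one socket directory, malloc bdev, subsystem, namespace and
      # vfio-user listener per emulated controller
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      $rpc bdev_malloc_create 64 512 -b Malloc$i
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done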
00:16:01.576 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.576 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:01.576 16:58:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:02.956 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:02.956 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:02.956 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:02.956 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:02.956 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:02.956 16:58:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:02.956 Malloc1 00:16:03.229 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:03.229 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:03.489 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:03.748 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:03.748 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:03.748 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:03.748 Malloc2 00:16:04.008 16:58:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:04.008 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:04.267 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:04.527 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:04.527 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1922561 00:16:04.527 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 1922561 ']' 00:16:04.527 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1922561 00:16:04.527 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:04.527 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:04.527 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1922561 00:16:04.527 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:04.527 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:04.527 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1922561' 00:16:04.527 killing process with pid 1922561 00:16:04.528 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1922561 00:16:04.528 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1922561 00:16:04.528 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:04.528 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:04.528 00:16:04.528 real 0m51.617s 00:16:04.528 user 3m17.754s 00:16:04.528 sys 0m2.727s 00:16:04.528 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.528 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:04.528 ************************************ 00:16:04.528 END TEST nvmf_vfio_user 00:16:04.528 ************************************ 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:04.788 ************************************ 00:16:04.788 START TEST nvmf_vfio_user_nvme_compliance 00:16:04.788 ************************************ 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:04.788 * Looking for test storage... 
00:16:04.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:04.788 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:04.789 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:04.789 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:04.789 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:04.789 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:04.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.789 --rc genhtml_branch_coverage=1 00:16:04.789 --rc genhtml_function_coverage=1 00:16:04.789 --rc genhtml_legend=1 00:16:04.789 --rc geninfo_all_blocks=1 00:16:04.789 --rc geninfo_unexecuted_blocks=1 00:16:04.789 00:16:04.789 ' 00:16:04.789 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:04.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.789 --rc genhtml_branch_coverage=1 00:16:04.789 --rc genhtml_function_coverage=1 00:16:04.789 --rc genhtml_legend=1 00:16:04.789 --rc geninfo_all_blocks=1 00:16:04.789 --rc geninfo_unexecuted_blocks=1 00:16:04.789 00:16:04.789 ' 00:16:04.789 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:04.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.789 --rc genhtml_branch_coverage=1 00:16:04.789 --rc genhtml_function_coverage=1 00:16:04.789 --rc genhtml_legend=1 00:16:04.789 --rc geninfo_all_blocks=1 00:16:04.789 --rc geninfo_unexecuted_blocks=1 00:16:04.789 00:16:04.789 ' 00:16:04.789 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:04.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.789 --rc genhtml_branch_coverage=1 00:16:04.789 --rc genhtml_function_coverage=1 00:16:04.789 --rc genhtml_legend=1 00:16:04.789 --rc geninfo_all_blocks=1 00:16:04.789 --rc 
geninfo_unexecuted_blocks=1 00:16:04.789 00:16:04.789 ' 00:16:04.789 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.789 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:04.789 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.789 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.789 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.789 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.789 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.789 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.789 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.789 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.789 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.789 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.049 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:05.049 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:05.049 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.049 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.049 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:05.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1923452 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1923452' 00:16:05.050 Process pid: 1923452 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1923452 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1923452 ']' 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.050 16:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:05.050 [2024-11-20 16:58:57.050371] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:16:05.050 [2024-11-20 16:58:57.050454] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.050 [2024-11-20 16:58:57.137212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:05.050 [2024-11-20 16:58:57.179246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.050 [2024-11-20 16:58:57.179282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.050 [2024-11-20 16:58:57.179288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.050 [2024-11-20 16:58:57.179293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.050 [2024-11-20 16:58:57.179297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:05.050 [2024-11-20 16:58:57.180823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.050 [2024-11-20 16:58:57.180976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.050 [2024-11-20 16:58:57.180979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.991 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.991 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:05.991 16:58:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:06.931 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:06.931 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:06.931 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:06.931 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.931 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:06.931 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.931 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:06.931 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:06.932 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.932 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:06.932 malloc0 00:16:06.932 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.932 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:06.932 16:58:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.932 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:06.932 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.932 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:06.932 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.932 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:06.932 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.932 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:06.932 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.932 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:06.932 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.932 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:06.932 00:16:06.932 00:16:06.932 CUnit - A unit testing framework for C - Version 2.1-3 00:16:06.932 http://cunit.sourceforge.net/ 00:16:06.932 00:16:06.932 00:16:06.932 Suite: nvme_compliance 00:16:06.932 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 16:58:59.101608] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.932 [2024-11-20 16:58:59.102905] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:06.932 [2024-11-20 16:58:59.102917] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:06.932 [2024-11-20 16:58:59.102922] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:06.932 [2024-11-20 16:58:59.104628] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.192 passed 00:16:07.192 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 16:58:59.182149] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.192 [2024-11-20 16:58:59.185169] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.192 passed 00:16:07.192 Test: admin_identify_ns ...[2024-11-20 16:58:59.263769] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.192 [2024-11-20 16:58:59.323166] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:07.192 [2024-11-20 16:58:59.331168] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:07.192 [2024-11-20 16:58:59.352252] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:16:07.451 passed 00:16:07.451 Test: admin_get_features_mandatory_features ...[2024-11-20 16:58:59.426473] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.451 [2024-11-20 16:58:59.429500] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.451 passed 00:16:07.451 Test: admin_get_features_optional_features ...[2024-11-20 16:58:59.509979] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.451 [2024-11-20 16:58:59.513002] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.451 passed 00:16:07.451 Test: admin_set_features_number_of_queues ...[2024-11-20 16:58:59.585718] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.711 [2024-11-20 16:58:59.691246] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.711 passed 00:16:07.711 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 16:58:59.764445] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.711 [2024-11-20 16:58:59.767466] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.711 passed 00:16:07.711 Test: admin_get_log_page_with_lpo ...[2024-11-20 16:58:59.843214] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.971 [2024-11-20 16:58:59.915168] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:07.971 [2024-11-20 16:58:59.928216] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.971 passed 00:16:07.971 Test: fabric_property_get ...[2024-11-20 16:58:59.998417] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.971 [2024-11-20 16:58:59.999613] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:07.971 [2024-11-20 16:59:00.001433] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.971 passed 00:16:07.971 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 16:59:00.079912] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.971 [2024-11-20 16:59:00.081110] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:07.971 [2024-11-20 16:59:00.082935] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.971 passed 00:16:08.232 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 16:59:00.156671] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.232 [2024-11-20 16:59:00.241170] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:08.232 [2024-11-20 16:59:00.257166] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:08.232 [2024-11-20 16:59:00.262238] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.232 passed 00:16:08.232 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 16:59:00.335479] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.232 [2024-11-20 16:59:00.336696] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:08.232 [2024-11-20 16:59:00.338498] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.232 passed 00:16:08.493 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 16:59:00.414524] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.493 [2024-11-20 16:59:00.489172] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:08.493 [2024-11-20 16:59:00.513164] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:08.493 [2024-11-20 16:59:00.521244] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.493 passed 00:16:08.493 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 16:59:00.593445] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.493 [2024-11-20 16:59:00.594643] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:08.493 [2024-11-20 16:59:00.594659] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:08.493 [2024-11-20 16:59:00.597463] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.493 passed 00:16:08.752 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 16:59:00.672176] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.752 [2024-11-20 16:59:00.765167] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:08.752 [2024-11-20 16:59:00.773167] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:08.752 [2024-11-20 16:59:00.781164] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:08.752 [2024-11-20 16:59:00.789166] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:08.752 [2024-11-20 16:59:00.818237] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.752 passed 00:16:08.752 Test: admin_create_io_sq_verify_pc ...[2024-11-20 16:59:00.894265] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.752 [2024-11-20 16:59:00.912170] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:09.013 [2024-11-20 16:59:00.929450] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:09.013 passed 00:16:09.013 Test: admin_create_io_qp_max_qps ...[2024-11-20 16:59:01.002906] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:09.953 [2024-11-20 16:59:02.111168] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:10.523 [2024-11-20 16:59:02.493785] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:10.523 passed 00:16:10.523 Test: admin_create_io_sq_shared_cq ...[2024-11-20 16:59:02.569572] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:10.783 [2024-11-20 16:59:02.702166] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:10.783 [2024-11-20 16:59:02.739203] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:10.783 passed 00:16:10.783 00:16:10.783 Run Summary: Type Total Ran Passed Failed Inactive 00:16:10.783 suites 1 1 n/a 0 0 00:16:10.783 tests 18 18 18 0 0 00:16:10.783 asserts 
360 360 360 0 n/a 00:16:10.783 00:16:10.783 Elapsed time = 1.496 seconds 00:16:10.783 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1923452 00:16:10.783 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1923452 ']' 00:16:10.783 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1923452 00:16:10.783 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:16:10.783 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:10.783 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1923452 00:16:10.783 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:10.783 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:10.783 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1923452' 00:16:10.783 killing process with pid 1923452 00:16:10.783 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1923452 00:16:10.783 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1923452 00:16:11.043 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:11.043 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:11.043 00:16:11.043 real 0m6.220s 00:16:11.043 user 0m17.593s 00:16:11.043 sys 0m0.559s 00:16:11.043 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.043 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:11.043 ************************************ 00:16:11.043 END TEST nvmf_vfio_user_nvme_compliance 00:16:11.043 ************************************ 00:16:11.043 16:59:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:11.043 16:59:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:11.043 16:59:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.043 16:59:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:11.043 ************************************ 00:16:11.043 START TEST nvmf_vfio_user_fuzz 00:16:11.043 ************************************ 00:16:11.043 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:11.043 * Looking for test storage... 
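The compliance run above and the fuzz run starting here stand up the same vfio-user target over the RPC socket. The traced sequence corresponds to the following rpc.py calls (the harness issues them through its rpc_cmd wrapper; the rpc.py spelling is an equivalent, assumed form):

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user                          # directory that hosts the vfio-user socket
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0  # 64 MB backing bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0

The -m 32 namespace cap is specific to the compliance subsystem; the fuzz run below creates its subsystem without it.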
00:16:11.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.043 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:11.043 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:16:11.043 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:11.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.304 --rc genhtml_branch_coverage=1 00:16:11.304 --rc genhtml_function_coverage=1 00:16:11.304 --rc genhtml_legend=1 00:16:11.304 --rc geninfo_all_blocks=1 00:16:11.304 --rc geninfo_unexecuted_blocks=1 00:16:11.304 00:16:11.304 ' 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:11.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.304 --rc genhtml_branch_coverage=1 00:16:11.304 --rc genhtml_function_coverage=1 00:16:11.304 --rc genhtml_legend=1 00:16:11.304 --rc geninfo_all_blocks=1 00:16:11.304 --rc geninfo_unexecuted_blocks=1 00:16:11.304 00:16:11.304 ' 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:11.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.304 --rc genhtml_branch_coverage=1 00:16:11.304 --rc genhtml_function_coverage=1 00:16:11.304 --rc genhtml_legend=1 00:16:11.304 --rc geninfo_all_blocks=1 00:16:11.304 --rc geninfo_unexecuted_blocks=1 00:16:11.304 00:16:11.304 ' 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:11.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.304 --rc genhtml_branch_coverage=1 00:16:11.304 --rc genhtml_function_coverage=1 00:16:11.304 --rc genhtml_legend=1 00:16:11.304 --rc geninfo_all_blocks=1 00:16:11.304 --rc geninfo_unexecuted_blocks=1 00:16:11.304 00:16:11.304 ' 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.304 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:11.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1924717 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1924717' 00:16:11.305 Process pid: 1924717 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1924717 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1924717 ']' 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
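waitforlisten's opening steps are visible in the trace above (@835 pid check, @839 rpc_addr defaulting to /var/tmp/spdk.sock, @840 max_retries=100, @842 the "Waiting for process..." banner); the poll loop itself runs with xtrace disabled, so it never appears in the log. A plausible reconstruction under those traced constraints (the loop body is an assumption):

    waitforlisten() {
        local pid=$1
        [ -z "$pid" ] && return 1                    # @835
        local rpc_addr=${2:-/var/tmp/spdk.sock}      # @839
        local max_retries=100                        # @840
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        # Assumed shape: retry a cheap RPC until the socket answers or retries run out.
        local i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1               # give up if the target died
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.5
        done
        return 1
    }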
00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:11.305 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:12.244 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:12.244 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:12.244 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:13.184 malloc0 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
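With the trid assembled, the thirty-second fuzz pass on the next lines can be replayed by hand; the flag readings in the comments are inferred from the invocation and its output, not from fuzzer documentation:

    # -m 0x2 keeps the fuzzer on core 1, off the target's core 0 (the target ran with -m 0x1);
    # -t 30 time-boxes the run to 30 seconds; -S 123456 seeds the RNG, from which the
    # per-queue random_seed values in the completion dump appear to be derived;
    # -N and -a are passed through from vfio_user_fuzz.sh@43 exactly as traced.
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

In the completion dump, opcodes print in decimal: successful admin opcodes 8, 9, 10, 24 are Abort, Set Features, Get Features, and Keep Alive (0x08/0x09/0x0A/0x18), and I/O opcode 0 is Flush (0x00).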
00:16:13.184 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:45.283 Fuzzing completed. Shutting down the fuzz application 00:16:45.283 00:16:45.283 Dumping successful admin opcodes: 00:16:45.283 8, 9, 10, 24, 00:16:45.283 Dumping successful io opcodes: 00:16:45.283 0, 00:16:45.283 NS: 0x20000081ef00 I/O qp, Total commands completed: 1428887, total successful commands: 5614, random_seed: 759075328 00:16:45.283 NS: 0x20000081ef00 admin qp, Total commands completed: 355603, total successful commands: 2866, random_seed: 3702227584 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1924717 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1924717 ']' 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1924717 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1924717 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1924717' 00:16:45.283 killing process with pid 1924717 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1924717 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1924717 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:45.283 00:16:45.283 real 0m32.819s 00:16:45.283 user 0m37.734s 00:16:45.283 sys 0m24.595s 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:45.283 
************************************ 00:16:45.283 END TEST nvmf_vfio_user_fuzz 00:16:45.283 ************************************ 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:45.283 ************************************ 00:16:45.283 START TEST nvmf_auth_target 00:16:45.283 ************************************ 00:16:45.283 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:45.283 * Looking for test storage... 00:16:45.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:45.283 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:45.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.284 --rc genhtml_branch_coverage=1 00:16:45.284 --rc genhtml_function_coverage=1 00:16:45.284 --rc genhtml_legend=1 00:16:45.284 --rc geninfo_all_blocks=1 00:16:45.284 --rc geninfo_unexecuted_blocks=1 00:16:45.284 00:16:45.284 ' 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:45.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.284 --rc genhtml_branch_coverage=1 00:16:45.284 --rc genhtml_function_coverage=1 00:16:45.284 --rc genhtml_legend=1 00:16:45.284 --rc geninfo_all_blocks=1 00:16:45.284 --rc geninfo_unexecuted_blocks=1 00:16:45.284 00:16:45.284 ' 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:45.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.284 --rc genhtml_branch_coverage=1 00:16:45.284 --rc genhtml_function_coverage=1 00:16:45.284 --rc genhtml_legend=1 00:16:45.284 --rc geninfo_all_blocks=1 00:16:45.284 --rc geninfo_unexecuted_blocks=1 00:16:45.284 00:16:45.284 ' 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:45.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.284 --rc genhtml_branch_coverage=1 00:16:45.284 --rc genhtml_function_coverage=1 00:16:45.284 --rc genhtml_legend=1 00:16:45.284 --rc geninfo_all_blocks=1 00:16:45.284 --rc geninfo_unexecuted_blocks=1 00:16:45.284 00:16:45.284 ' 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.284 16:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:45.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:45.284 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:45.285 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.285 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:45.285 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:45.285 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:45.285 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.285 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:45.285 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.285 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:45.285 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:45.285 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:45.285 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:51.884 
16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:51.884 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:51.884 16:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:51.884 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:51.884 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:51.884 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:51.884 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:51.885 16:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:51.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:16:51.885 00:16:51.885 --- 10.0.0.2 ping statistics --- 00:16:51.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.885 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:51.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:51.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:16:51.885 00:16:51.885 --- 10.0.0.1 ping statistics --- 00:16:51.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.885 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1934693 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1934693 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1934693 ']' 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
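The trace above is nvmftestinit wiring the two E810 ports back to back: cvl_0_0 moves into a private network namespace that will host the target (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens the NVMe/TCP port, and a ping in each direction proves the path before nvmf_tgt is launched inside the namespace. Condensed from the trace, the topology comes down to:

  # target port lives in its own netns; initiator port stays in the root netns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

Since the two ports are presumably looped to each other in this NET_TYPE=phy rig, traffic between 10.0.0.1 and 10.0.0.2 crosses a real link rather than a veth pair, which is the point of the phy configuration.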
00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.885 16:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1935041 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b982edcc68bc30a40b719236603cdb333260337c132fd09b 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.1NU 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b982edcc68bc30a40b719236603cdb333260337c132fd09b 0 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b982edcc68bc30a40b719236603cdb333260337c132fd09b 0 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b982edcc68bc30a40b719236603cdb333260337c132fd09b 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
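gen_dhchap_key, traced above, draws len/2 bytes from /dev/urandom, renders them as a len-character hex string, and wraps that string in the DHHC-1 secret format. The python body itself is hidden by xtrace; judging from the secrets that appear later in this log, it base64-encodes the ASCII key with a 4-byte CRC32 trailer appended, the same layout nvme-cli's gen-dhchap-key produces. A sketch under that assumption:

  gen_dhchap_key() {   # sketch of nvmf/common.sh gen_dhchap_key; python body assumed
      local digest=$1 len=$2
      local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      local key file
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # len hex characters
      file=$(mktemp -t "spdk.key-$digest.XXX")
      # DHHC-1:<digest id>:<base64(key || crc32(key))>:  -- CRC32 little-endian (assumed)
      KEY="$key" ID="${digests[$digest]}" python3 -c 'import base64, os, zlib; k = os.environ["KEY"].encode(); crc = zlib.crc32(k).to_bytes(4, "little"); print("DHHC-1:%02x:%s:" % (int(os.environ["ID"]), base64.b64encode(k + crc).decode()), end="")' > "$file"
      chmod 0600 "$file"
      echo "$file"
  }

so keys[0]=$(gen_dhchap_key null 48) yields a 0600-mode file like the /tmp/spdk.key-null.1NU seen next.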
00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.1NU 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.1NU 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.1NU 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:52.457 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b49a3d6b001941231d31e5aac841e91bee9c3bc5875e59ac1597d5ea37f66b85 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.iDG 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b49a3d6b001941231d31e5aac841e91bee9c3bc5875e59ac1597d5ea37f66b85 3 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b49a3d6b001941231d31e5aac841e91bee9c3bc5875e59ac1597d5ea37f66b85 3 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b49a3d6b001941231d31e5aac841e91bee9c3bc5875e59ac1597d5ea37f66b85 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.iDG 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.iDG 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.iDG 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
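The file just produced for keys[0] can be sanity-checked offline; under the same CRC32-trailer reading, the last four decoded bytes must be the little-endian CRC of the key body, and the body itself is the xxd output captured above (path and key taken from this log):

  python3 -c 'import base64, zlib; raw = base64.b64decode(open("/tmp/spdk.key-null.1NU").read().strip().split(":")[2]); assert zlib.crc32(raw[:-4]).to_bytes(4, "little") == raw[-4:]; print(raw[:-4].decode())'
  # expected: b982edcc68bc30a40b719236603cdb333260337c132fd09b (the key= value traced above)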
00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=da00a91de35e801827e6e97036ef992f 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.jHU 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key da00a91de35e801827e6e97036ef992f 1 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 da00a91de35e801827e6e97036ef992f 1 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=da00a91de35e801827e6e97036ef992f 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.jHU 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.jHU 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.jHU 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fa5623da0f2a5e58fc6c17feee531d36775393c5e4aaf767 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.OYr 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fa5623da0f2a5e58fc6c17feee531d36775393c5e4aaf767 2 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fa5623da0f2a5e58fc6c17feee531d36775393c5e4aaf767 2 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:52.749 16:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fa5623da0f2a5e58fc6c17feee531d36775393c5e4aaf767 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:52.749 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.OYr 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.OYr 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.OYr 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=17420660489076e72fdfc80f17319ae7222d6a73e8e4cd55 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.PKp 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 17420660489076e72fdfc80f17319ae7222d6a73e8e4cd55 2 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 17420660489076e72fdfc80f17319ae7222d6a73e8e4cd55 2 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=17420660489076e72fdfc80f17319ae7222d6a73e8e4cd55 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.PKp 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.PKp 00:16:52.750 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.PKp 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
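Every slot repeats that recipe at a different size: the requested length is always twice the byte count drawn from /dev/urandom, and the digest name picks the matching DHHC-1 header id recorded with the secret, as the traced calls show:

  xxd -p -c0 -l 16 /dev/urandom   # "sha256 32": 32 hex chars, header id 01
  xxd -p -c0 -l 24 /dev/urandom   # "sha384 48": 48 hex chars, header id 02
  xxd -p -c0 -l 32 /dev/urandom   # "sha512 64": 64 hex chars, header id 03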
00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c4a9cac900b4cc63315ea4877787b0cd 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.BM9 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c4a9cac900b4cc63315ea4877787b0cd 1 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c4a9cac900b4cc63315ea4877787b0cd 1 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c4a9cac900b4cc63315ea4877787b0cd 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.BM9 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.BM9 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.BM9 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9f95c72687c1e8c897254dd0c5e7203b9e26f64e02506bb817216abca2433620 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:53.042 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Rst 00:16:53.043 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 9f95c72687c1e8c897254dd0c5e7203b9e26f64e02506bb817216abca2433620 3 00:16:53.043 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9f95c72687c1e8c897254dd0c5e7203b9e26f64e02506bb817216abca2433620 3 00:16:53.043 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:53.043 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:53.043 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9f95c72687c1e8c897254dd0c5e7203b9e26f64e02506bb817216abca2433620 00:16:53.043 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:53.043 16:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:53.043 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Rst 00:16:53.043 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Rst 00:16:53.043 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Rst 00:16:53.043 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:53.043 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1934693 00:16:53.043 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1934693 ']' 00:16:53.043 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.043 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:53.043 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.043 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:53.043 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1935041 /var/tmp/host.sock 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1935041 ']' 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:53.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
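At this point both daemons are up: nvmf_tgt (pid 1934693) on the default /var/tmp/spdk.sock inside the namespace, and the host-side spdk_tgt (pid 1935041) on /var/tmp/host.sock. The loop that follows loads each keys[i]/ckeys[i] pair into a keyring on both ends: the key authenticates the host to the subsystem, the ctrlr key makes the DH-HMAC-CHAP exchange mutual. Stripped of xtrace noise, one iteration amounts to (rpc.py path abbreviated):

  # target side, default RPC socket
  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.1NU
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iDG
  # host side, via the hostrpc wrapper
  scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.1NU
  scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iDG

ckeys[3] was left empty on purpose, so slot 3 exercises the unidirectional case (the [[ -n '' ]] check below skips its ckey registration). Each digest/dhgroup/key combination is then driven the same way: bdev_nvme_set_options pins the host to one digest and dhgroup, nvmf_subsystem_add_host authorizes the host NQN with that key pair, bdev_nvme_attach_controller performs the authenticated connect, and nvmf_subsystem_get_qpairs is checked for "auth": {"state": "completed"} before the controller is detached.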
00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1NU 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.1NU 00:16:53.325 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.1NU 00:16:53.585 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.iDG ]] 00:16:53.585 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iDG 00:16:53.585 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.585 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.585 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.586 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iDG 00:16:53.586 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iDG 00:16:53.846 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:53.846 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.jHU 00:16:53.846 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.846 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.846 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.846 16:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.jHU 00:16:53.846 16:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.jHU 00:16:54.107 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.OYr ]] 00:16:54.107 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OYr 00:16:54.107 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.107 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.107 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.107 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OYr 00:16:54.107 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OYr 00:16:54.107 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:54.107 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.PKp 00:16:54.107 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.107 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.107 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.107 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.PKp 00:16:54.107 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.PKp 00:16:54.368 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.BM9 ]] 00:16:54.368 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BM9 00:16:54.368 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.368 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.368 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.368 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BM9 00:16:54.368 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BM9 00:16:54.628 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:54.628 16:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Rst 00:16:54.628 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.628 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.629 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.629 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Rst 00:16:54.629 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Rst 00:16:54.889 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:54.889 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:54.889 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.889 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.889 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:54.889 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:54.889 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:54.889 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.889 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.889 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:54.889 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:54.889 16:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.889 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.889 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.889 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.889 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.889 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.889 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.889 
16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.150 00:16:55.150 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.150 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.150 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.412 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.412 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.412 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.412 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.412 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.412 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.412 { 00:16:55.412 "cntlid": 1, 00:16:55.412 "qid": 0, 00:16:55.412 "state": "enabled", 00:16:55.412 "thread": "nvmf_tgt_poll_group_000", 00:16:55.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:55.412 "listen_address": { 00:16:55.412 "trtype": "TCP", 00:16:55.412 "adrfam": "IPv4", 00:16:55.412 "traddr": "10.0.0.2", 00:16:55.412 "trsvcid": "4420" 00:16:55.412 }, 00:16:55.412 "peer_address": { 00:16:55.412 "trtype": "TCP", 00:16:55.412 "adrfam": "IPv4", 00:16:55.412 "traddr": "10.0.0.1", 00:16:55.412 "trsvcid": "60484" 00:16:55.412 }, 00:16:55.412 "auth": { 00:16:55.412 "state": "completed", 00:16:55.412 "digest": "sha256", 00:16:55.412 "dhgroup": "null" 00:16:55.412 } 00:16:55.412 } 00:16:55.412 ]' 00:16:55.412 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.412 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.412 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.673 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:55.673 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.673 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.673 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.673 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.673 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:16:55.673 16:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.617 16:59:48 
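
The same key pair is then exercised through the kernel initiator: nvme connect passes the host secret with --dhchap-secret and, for bidirectional authentication, the expected controller secret with --dhchap-ctrl-secret. A sketch with the secrets abbreviated and the literal host NQN and ID replaced by variables (the full DHHC-1 strings appear verbatim in the log):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:00:...' \
        --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
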
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.617 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.877 00:16:56.877 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.877 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.877 16:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.140 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.140 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.140 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.140 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.140 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.140 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.140 { 00:16:57.140 "cntlid": 3, 00:16:57.140 "qid": 0, 00:16:57.140 "state": "enabled", 00:16:57.140 "thread": "nvmf_tgt_poll_group_000", 00:16:57.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:57.140 "listen_address": { 00:16:57.140 "trtype": "TCP", 00:16:57.140 "adrfam": "IPv4", 00:16:57.140 "traddr": "10.0.0.2", 00:16:57.140 "trsvcid": "4420" 00:16:57.140 }, 00:16:57.140 "peer_address": { 00:16:57.140 "trtype": "TCP", 00:16:57.140 "adrfam": "IPv4", 00:16:57.140 "traddr": "10.0.0.1", 00:16:57.140 "trsvcid": "60516" 00:16:57.140 }, 00:16:57.140 "auth": { 00:16:57.140 "state": "completed", 00:16:57.140 "digest": "sha256", 00:16:57.140 "dhgroup": "null" 00:16:57.140 } 00:16:57.140 } 00:16:57.140 ]' 00:16:57.140 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.140 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.140 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.140 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:57.140 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.140 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.140 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.140 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
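
bdev_connect in the script is a thin wrapper over bdev_nvme_attach_controller on the host socket; this is where the SPDK initiator performs the actual authentication handshake. Spelled out for the key1 pass, with addresses and NQNs as in the trace:

    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
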
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.400 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:16:57.401 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:16:57.971 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.971 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:57.971 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.971 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.971 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.971 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.971 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:57.971 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:58.232 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:58.232 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.232 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.232 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:58.232 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:58.232 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.232 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.232 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.232 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.232 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.232 16:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.232 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.232 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.492 00:16:58.492 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.492 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.492 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.752 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.753 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.753 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.753 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.753 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.753 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.753 { 00:16:58.753 "cntlid": 5, 00:16:58.753 "qid": 0, 00:16:58.753 "state": "enabled", 00:16:58.753 "thread": "nvmf_tgt_poll_group_000", 00:16:58.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:58.753 "listen_address": { 00:16:58.753 "trtype": "TCP", 00:16:58.753 "adrfam": "IPv4", 00:16:58.753 "traddr": "10.0.0.2", 00:16:58.753 "trsvcid": "4420" 00:16:58.753 }, 00:16:58.753 "peer_address": { 00:16:58.753 "trtype": "TCP", 00:16:58.753 "adrfam": "IPv4", 00:16:58.753 "traddr": "10.0.0.1", 00:16:58.753 "trsvcid": "60534" 00:16:58.753 }, 00:16:58.753 "auth": { 00:16:58.753 "state": "completed", 00:16:58.753 "digest": "sha256", 00:16:58.753 "dhgroup": "null" 00:16:58.753 } 00:16:58.753 } 00:16:58.753 ]' 00:16:58.753 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.753 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.753 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.753 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:58.753 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.753 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.753 16:59:50 
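
On the target side the pairing is controlled per host: nvmf_subsystem_add_host both allow-lists the host NQN on the subsystem and pins the key material it must authenticate with, and nvmf_subsystem_remove_host revokes it at the end of each pass. Sketch for the key2 round:

    # allow the host and require DH-HMAC-CHAP with key2 (ckey2 additionally
    # makes the controller authenticate itself back to the host)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # torn down again after the connect/verify cycle
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
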
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.753 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.014 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:16:59.014 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:16:59.585 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.585 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:59.585 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.585 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.586 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.586 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.586 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:59.586 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:59.847 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:59.847 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.847 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:59.847 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:59.847 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:59.847 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.847 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:59.847 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.847 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:59.847 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.847 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:59.847 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.847 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.106 00:17:00.106 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.106 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.106 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.367 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.367 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.367 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.367 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.367 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.367 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.367 { 00:17:00.367 "cntlid": 7, 00:17:00.367 "qid": 0, 00:17:00.367 "state": "enabled", 00:17:00.367 "thread": "nvmf_tgt_poll_group_000", 00:17:00.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:00.367 "listen_address": { 00:17:00.367 "trtype": "TCP", 00:17:00.367 "adrfam": "IPv4", 00:17:00.367 "traddr": "10.0.0.2", 00:17:00.367 "trsvcid": "4420" 00:17:00.367 }, 00:17:00.367 "peer_address": { 00:17:00.367 "trtype": "TCP", 00:17:00.367 "adrfam": "IPv4", 00:17:00.367 "traddr": "10.0.0.1", 00:17:00.367 "trsvcid": "60568" 00:17:00.367 }, 00:17:00.367 "auth": { 00:17:00.367 "state": "completed", 00:17:00.367 "digest": "sha256", 00:17:00.367 "dhgroup": "null" 00:17:00.367 } 00:17:00.367 } 00:17:00.367 ]' 00:17:00.367 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.367 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.367 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.367 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:00.367 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.367 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
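
Note that the key3 passes omit --dhchap-ctrlr-key everywhere: no controller key is configured for that index, so authentication is unidirectional (the host proves its identity, but the controller is not challenged back). The script achieves this with the ${ckeys[$3]:+...} expansion visible at auth.sh@68, which collapses to nothing when the ckey entry is empty; roughly, with the positional $3 renamed to $keyid for readability:

    # expands to (--dhchap-ctrlr-key ckeyN) only if ckeys[keyid] is non-empty
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"
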
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.367 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.367 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.628 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:17:00.628 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:17:01.200 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.200 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:01.200 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.200 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.200 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.200 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.200 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.200 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:01.200 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:01.460 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:01.460 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.460 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:01.460 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:01.460 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:01.460 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.460 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.460 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
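
Here the dhgroup loop advances from null to ffdhe2048: before each pass the host's negotiable set is narrowed with bdev_nvme_set_options, so every connection in that pass must settle on exactly one digest and one DH group. The host-side call, standalone:

    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
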
common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.460 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.460 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.460 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.460 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.460 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.720 00:17:01.720 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.720 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.720 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.980 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.980 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.980 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.980 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.980 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.980 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.980 { 00:17:01.980 "cntlid": 9, 00:17:01.980 "qid": 0, 00:17:01.980 "state": "enabled", 00:17:01.980 "thread": "nvmf_tgt_poll_group_000", 00:17:01.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:01.980 "listen_address": { 00:17:01.980 "trtype": "TCP", 00:17:01.980 "adrfam": "IPv4", 00:17:01.980 "traddr": "10.0.0.2", 00:17:01.980 "trsvcid": "4420" 00:17:01.980 }, 00:17:01.980 "peer_address": { 00:17:01.980 "trtype": "TCP", 00:17:01.980 "adrfam": "IPv4", 00:17:01.980 "traddr": "10.0.0.1", 00:17:01.980 "trsvcid": "60600" 00:17:01.980 }, 00:17:01.980 "auth": { 00:17:01.980 "state": "completed", 00:17:01.980 "digest": "sha256", 00:17:01.980 "dhgroup": "ffdhe2048" 00:17:01.980 } 00:17:01.980 } 00:17:01.980 ]' 00:17:01.980 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.980 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.980 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.980 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:17:01.980 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.980 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.980 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.980 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.239 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:17:02.239 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:17:02.808 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.808 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:02.808 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.808 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.808 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.808 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.808 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:02.808 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:03.069 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:03.069 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.069 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:03.069 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:03.069 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:03.069 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.069 16:59:55 
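
The DHHC-1 strings handed to nvme connect follow the NVMe-oF secret representation: a 'DHHC-1:' prefix, a two-digit transform indicator (00 for a plain secret; 01, 02, 03 for SHA-256, SHA-384, SHA-512 transformed secrets, which lines up with key-file names like spdk.key-sha512.Rst above), the base64 key material, and a trailing colon. If your nvme-cli build ships gen-dhchap-key, a compatible secret can be minted roughly like this (flags per nvme-cli's own manual, not taken from this log, so treat as an assumption):

    # hypothetical invocation: 64-byte secret, SHA-512 transform (-m 3)
    nvme gen-dhchap-key -m 3 -l 64 -n nqn.2024-03.io.spdk:cnode0
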
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.069 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.069 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.069 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.069 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.069 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.069 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.329 00:17:03.329 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.329 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.329 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.588 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.588 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.588 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.588 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.588 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.588 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.588 { 00:17:03.588 "cntlid": 11, 00:17:03.588 "qid": 0, 00:17:03.588 "state": "enabled", 00:17:03.588 "thread": "nvmf_tgt_poll_group_000", 00:17:03.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:03.588 "listen_address": { 00:17:03.588 "trtype": "TCP", 00:17:03.588 "adrfam": "IPv4", 00:17:03.588 "traddr": "10.0.0.2", 00:17:03.588 "trsvcid": "4420" 00:17:03.588 }, 00:17:03.588 "peer_address": { 00:17:03.588 "trtype": "TCP", 00:17:03.588 "adrfam": "IPv4", 00:17:03.588 "traddr": "10.0.0.1", 00:17:03.588 "trsvcid": "57248" 00:17:03.588 }, 00:17:03.588 "auth": { 00:17:03.588 "state": "completed", 00:17:03.588 "digest": "sha256", 00:17:03.588 "dhgroup": "ffdhe2048" 00:17:03.588 } 00:17:03.588 } 00:17:03.588 ]' 00:17:03.588 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.588 16:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.588 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.588 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:03.588 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.588 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.588 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.588 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.847 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:17:03.847 16:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:17:04.417 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.417 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:04.418 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.418 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.418 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.418 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.418 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:04.418 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:04.678 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:04.678 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.678 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:04.678 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:04.678 16:59:56 
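
Every hostrpc line in the trace expands at auth.sh@31 to the same thing: rpc.py aimed at the host application's Unix socket rather than the target's. Reconstructed from those call sites, with $rootdir standing in for the checkout path shown in the log:

    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }
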
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:04.678 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.678 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.678 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.678 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.678 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.678 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.678 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.678 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.939 00:17:04.939 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.939 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.939 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.199 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.199 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.199 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.199 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.199 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.199 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.199 { 00:17:05.199 "cntlid": 13, 00:17:05.199 "qid": 0, 00:17:05.199 "state": "enabled", 00:17:05.199 "thread": "nvmf_tgt_poll_group_000", 00:17:05.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:05.199 "listen_address": { 00:17:05.199 "trtype": "TCP", 00:17:05.199 "adrfam": "IPv4", 00:17:05.199 "traddr": "10.0.0.2", 00:17:05.199 "trsvcid": "4420" 00:17:05.199 }, 00:17:05.199 "peer_address": { 00:17:05.199 "trtype": "TCP", 00:17:05.199 "adrfam": "IPv4", 00:17:05.199 "traddr": "10.0.0.1", 00:17:05.199 "trsvcid": "57284" 00:17:05.199 }, 00:17:05.199 "auth": { 00:17:05.199 "state": "completed", 00:17:05.199 "digest": 
"sha256", 00:17:05.199 "dhgroup": "ffdhe2048" 00:17:05.199 } 00:17:05.199 } 00:17:05.199 ]' 00:17:05.199 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.199 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.199 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.199 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:05.199 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.199 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.199 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.199 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.459 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:17:05.459 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:17:06.030 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.030 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:06.030 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.030 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.030 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.030 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.030 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:06.030 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:06.291 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:06.291 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.291 16:59:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:06.291 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:06.291 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:06.291 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.291 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:06.291 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.291 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.291 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.291 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.291 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.291 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.551 00:17:06.551 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.551 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.551 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.812 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.812 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.812 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.812 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.812 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.812 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.812 { 00:17:06.812 "cntlid": 15, 00:17:06.812 "qid": 0, 00:17:06.812 "state": "enabled", 00:17:06.812 "thread": "nvmf_tgt_poll_group_000", 00:17:06.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:06.812 "listen_address": { 00:17:06.812 "trtype": "TCP", 00:17:06.812 "adrfam": "IPv4", 00:17:06.812 "traddr": "10.0.0.2", 00:17:06.812 "trsvcid": "4420" 00:17:06.812 }, 00:17:06.812 "peer_address": { 00:17:06.812 "trtype": "TCP", 00:17:06.812 "adrfam": "IPv4", 00:17:06.812 "traddr": "10.0.0.1", 00:17:06.812 
"trsvcid": "57292" 00:17:06.812 }, 00:17:06.812 "auth": { 00:17:06.812 "state": "completed", 00:17:06.812 "digest": "sha256", 00:17:06.812 "dhgroup": "ffdhe2048" 00:17:06.812 } 00:17:06.812 } 00:17:06.812 ]' 00:17:06.812 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.812 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.812 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.812 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:06.812 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.812 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.812 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.812 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.074 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:17:07.074 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:17:07.645 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.645 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:07.645 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.645 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.906 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.906 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.906 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.906 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:07.906 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:07.906 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:07.906 16:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.906 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:07.906 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:07.906 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:07.906 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.906 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.906 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.906 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.906 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.906 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.906 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.906 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.167 00:17:08.167 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.167 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.167 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.428 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.428 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.428 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.428 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.428 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.428 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.428 { 00:17:08.428 "cntlid": 17, 00:17:08.428 "qid": 0, 00:17:08.429 "state": "enabled", 00:17:08.429 "thread": "nvmf_tgt_poll_group_000", 00:17:08.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:08.429 "listen_address": { 00:17:08.429 "trtype": "TCP", 00:17:08.429 "adrfam": "IPv4", 
00:17:08.429 "traddr": "10.0.0.2", 00:17:08.429 "trsvcid": "4420" 00:17:08.429 }, 00:17:08.429 "peer_address": { 00:17:08.429 "trtype": "TCP", 00:17:08.429 "adrfam": "IPv4", 00:17:08.429 "traddr": "10.0.0.1", 00:17:08.429 "trsvcid": "57324" 00:17:08.429 }, 00:17:08.429 "auth": { 00:17:08.429 "state": "completed", 00:17:08.429 "digest": "sha256", 00:17:08.429 "dhgroup": "ffdhe3072" 00:17:08.429 } 00:17:08.429 } 00:17:08.429 ]' 00:17:08.429 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.429 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.429 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.429 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:08.429 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.429 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.429 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.429 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.689 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:17:08.689 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:17:09.261 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.261 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.261 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.261 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.261 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.261 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.261 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:09.261 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
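
One detail worth noticing in the connect lines: the host NQN is simply the UUID form derived from the host ID, so the -q and --hostid arguments always travel together:

    hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"
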
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:09.522 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:09.522 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.522 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:09.522 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:09.522 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:09.522 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.522 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.522 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.522 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.522 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.522 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.522 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.522 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.783 00:17:09.783 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.783 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.783 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.043 17:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.043 17:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.043 17:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.043 17:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.043 17:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.043 17:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.043 { 
00:17:10.043 "cntlid": 19, 00:17:10.043 "qid": 0, 00:17:10.043 "state": "enabled", 00:17:10.043 "thread": "nvmf_tgt_poll_group_000", 00:17:10.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:10.043 "listen_address": { 00:17:10.043 "trtype": "TCP", 00:17:10.043 "adrfam": "IPv4", 00:17:10.043 "traddr": "10.0.0.2", 00:17:10.043 "trsvcid": "4420" 00:17:10.043 }, 00:17:10.043 "peer_address": { 00:17:10.043 "trtype": "TCP", 00:17:10.043 "adrfam": "IPv4", 00:17:10.043 "traddr": "10.0.0.1", 00:17:10.043 "trsvcid": "57352" 00:17:10.044 }, 00:17:10.044 "auth": { 00:17:10.044 "state": "completed", 00:17:10.044 "digest": "sha256", 00:17:10.044 "dhgroup": "ffdhe3072" 00:17:10.044 } 00:17:10.044 } 00:17:10.044 ]' 00:17:10.044 17:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.044 17:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.044 17:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.044 17:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:10.044 17:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.044 17:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.044 17:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.044 17:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.304 17:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:17:10.304 17:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:17:10.875 17:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.875 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:10.875 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.875 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.875 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.875 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.875 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:10.875 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:11.136 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:11.136 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.136 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:11.136 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:11.136 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:11.136 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.136 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.136 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.136 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.136 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.136 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.136 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.136 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.397 00:17:11.397 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.397 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.397 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.659 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.659 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.659 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.659 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.659 17:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.659 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.659 { 00:17:11.659 "cntlid": 21, 00:17:11.659 "qid": 0, 00:17:11.659 "state": "enabled", 00:17:11.659 "thread": "nvmf_tgt_poll_group_000", 00:17:11.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:11.659 "listen_address": { 00:17:11.659 "trtype": "TCP", 00:17:11.659 "adrfam": "IPv4", 00:17:11.659 "traddr": "10.0.0.2", 00:17:11.659 "trsvcid": "4420" 00:17:11.659 }, 00:17:11.659 "peer_address": { 00:17:11.659 "trtype": "TCP", 00:17:11.659 "adrfam": "IPv4", 00:17:11.659 "traddr": "10.0.0.1", 00:17:11.659 "trsvcid": "57394" 00:17:11.659 }, 00:17:11.659 "auth": { 00:17:11.659 "state": "completed", 00:17:11.659 "digest": "sha256", 00:17:11.659 "dhgroup": "ffdhe3072" 00:17:11.659 } 00:17:11.659 } 00:17:11.659 ]' 00:17:11.659 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.659 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.659 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.659 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:11.659 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.659 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.659 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.659 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.920 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:17:11.920 17:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:17:12.491 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.491 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.491 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.491 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.491 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:12.491 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.491 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:12.491 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:12.752 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:12.752 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.752 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:12.752 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:12.752 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:12.752 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.752 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:12.752 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.752 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.752 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.752 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:12.752 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.752 17:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.012 00:17:13.012 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.012 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.013 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.272 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.272 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.272 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.272 17:00:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.272 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.272 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.272 { 00:17:13.272 "cntlid": 23, 00:17:13.272 "qid": 0, 00:17:13.272 "state": "enabled", 00:17:13.272 "thread": "nvmf_tgt_poll_group_000", 00:17:13.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:13.272 "listen_address": { 00:17:13.272 "trtype": "TCP", 00:17:13.272 "adrfam": "IPv4", 00:17:13.272 "traddr": "10.0.0.2", 00:17:13.272 "trsvcid": "4420" 00:17:13.272 }, 00:17:13.272 "peer_address": { 00:17:13.272 "trtype": "TCP", 00:17:13.272 "adrfam": "IPv4", 00:17:13.272 "traddr": "10.0.0.1", 00:17:13.272 "trsvcid": "60268" 00:17:13.272 }, 00:17:13.272 "auth": { 00:17:13.272 "state": "completed", 00:17:13.272 "digest": "sha256", 00:17:13.272 "dhgroup": "ffdhe3072" 00:17:13.272 } 00:17:13.272 } 00:17:13.272 ]' 00:17:13.272 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.273 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.273 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.273 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:13.273 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.273 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.273 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.273 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.533 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:17:13.533 17:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:17:14.104 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.104 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:14.104 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.104 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.104 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:14.104 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.104 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.104 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:14.104 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:14.364 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:14.364 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.364 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:14.364 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:14.364 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:14.364 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.364 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.364 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.364 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.364 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.364 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.364 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.364 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.625 00:17:14.625 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.625 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.625 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.885 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.885 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.885 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.885 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.885 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.885 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.885 { 00:17:14.885 "cntlid": 25, 00:17:14.886 "qid": 0, 00:17:14.886 "state": "enabled", 00:17:14.886 "thread": "nvmf_tgt_poll_group_000", 00:17:14.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:14.886 "listen_address": { 00:17:14.886 "trtype": "TCP", 00:17:14.886 "adrfam": "IPv4", 00:17:14.886 "traddr": "10.0.0.2", 00:17:14.886 "trsvcid": "4420" 00:17:14.886 }, 00:17:14.886 "peer_address": { 00:17:14.886 "trtype": "TCP", 00:17:14.886 "adrfam": "IPv4", 00:17:14.886 "traddr": "10.0.0.1", 00:17:14.886 "trsvcid": "60300" 00:17:14.886 }, 00:17:14.886 "auth": { 00:17:14.886 "state": "completed", 00:17:14.886 "digest": "sha256", 00:17:14.886 "dhgroup": "ffdhe4096" 00:17:14.886 } 00:17:14.886 } 00:17:14.886 ]' 00:17:14.886 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.886 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.886 17:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.886 17:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:14.886 17:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.886 17:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.886 17:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.886 17:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.147 17:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:17:15.147 17:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:17:15.738 17:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.739 17:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:15.739 17:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.739 17:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.739 17:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.998 17:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.999 17:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:15.999 17:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:15.999 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:15.999 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.999 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:15.999 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:15.999 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:15.999 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.999 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.999 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.999 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.999 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.999 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.999 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.999 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.259 00:17:16.259 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.259 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.259 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.519 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.519 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.519 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.519 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.519 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.519 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.519 { 00:17:16.519 "cntlid": 27, 00:17:16.519 "qid": 0, 00:17:16.519 "state": "enabled", 00:17:16.519 "thread": "nvmf_tgt_poll_group_000", 00:17:16.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:16.519 "listen_address": { 00:17:16.519 "trtype": "TCP", 00:17:16.519 "adrfam": "IPv4", 00:17:16.519 "traddr": "10.0.0.2", 00:17:16.519 "trsvcid": "4420" 00:17:16.519 }, 00:17:16.519 "peer_address": { 00:17:16.519 "trtype": "TCP", 00:17:16.519 "adrfam": "IPv4", 00:17:16.519 "traddr": "10.0.0.1", 00:17:16.519 "trsvcid": "60338" 00:17:16.519 }, 00:17:16.519 "auth": { 00:17:16.519 "state": "completed", 00:17:16.519 "digest": "sha256", 00:17:16.519 "dhgroup": "ffdhe4096" 00:17:16.519 } 00:17:16.519 } 00:17:16.519 ]' 00:17:16.519 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.519 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.519 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.519 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:16.519 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.519 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.519 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.519 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.778 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:17:16.778 17:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:17:17.347 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:17.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.347 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:17.347 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.347 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.347 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.347 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.347 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:17.347 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:17.608 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:17.608 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.608 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:17.608 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:17.608 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:17.608 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.608 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.608 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.608 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.608 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.608 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.608 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.608 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.869 00:17:17.869 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
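
Every iteration in this trace follows the same shape: the host is first restricted to exactly one digest/dhgroup pair, each of the four pre-loaded keys is then registered for the host NQN on the target, and the TCP attach is expected to complete DH-HMAC-CHAP before the qpair is inspected and torn down. The following is a condensed, illustrative sketch of one such round trip, not the suite's verbatim code: it inlines the rpc_cmd/hostrpc wrappers as shell functions, assumes the target app listens on the default RPC socket (only the host app's /var/tmp/host.sock is visible in the trace), picks sha256/ffdhe4096/key2 as a representative combination, and relies on the key0..key3/ckey0..ckey2 key names having been loaded earlier in the suite (not shown in this excerpt).

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration as exercised by the trace above.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc_py" -s /var/tmp/host.sock "$@"; }  # host-side bdev app (socket from the trace)
rpc_cmd() { "$rpc_py" "$@"; }                        # target app; default socket is an assumption

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Allow exactly one digest and one DH group on the host for this pass.
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Register key2 and its controller key for this host on the target
# (key3 is the exception: the trace registers it without a controller key).
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attach over TCP; the controller only comes up if DH-HMAC-CHAP completes.
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Verify the controller exists and the qpair negotiated the expected parameters.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]

# Tear down and deauthorize the host before the next key/dhgroup combination.
hostrpc bdev_nvme_detach_controller nvme0
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Between the detach and the remove_host, each iteration in the trace additionally drives the same key material through the kernel initiator (nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:..., followed by nvme disconnect), so both the SPDK bdev path and the Linux host path are authenticated against the same target configuration before the host entry is removed.
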
00:17:17.869 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.869 17:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.130 17:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.130 17:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.130 17:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.130 17:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.130 17:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.130 17:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.130 { 00:17:18.130 "cntlid": 29, 00:17:18.130 "qid": 0, 00:17:18.130 "state": "enabled", 00:17:18.130 "thread": "nvmf_tgt_poll_group_000", 00:17:18.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:18.130 "listen_address": { 00:17:18.130 "trtype": "TCP", 00:17:18.130 "adrfam": "IPv4", 00:17:18.130 "traddr": "10.0.0.2", 00:17:18.130 "trsvcid": "4420" 00:17:18.130 }, 00:17:18.130 "peer_address": { 00:17:18.130 "trtype": "TCP", 00:17:18.130 "adrfam": "IPv4", 00:17:18.130 "traddr": "10.0.0.1", 00:17:18.130 "trsvcid": "60366" 00:17:18.130 }, 00:17:18.130 "auth": { 00:17:18.130 "state": "completed", 00:17:18.130 "digest": "sha256", 00:17:18.130 "dhgroup": "ffdhe4096" 00:17:18.130 } 00:17:18.130 } 00:17:18.130 ]' 00:17:18.130 17:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.130 17:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.130 17:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.130 17:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:18.130 17:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.130 17:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.130 17:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.130 17:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.391 17:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:17:18.391 17:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: 
--dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:17:18.960 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.221 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.482 00:17:19.482 17:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.482 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.482 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.743 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.743 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.743 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.743 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.744 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.744 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.744 { 00:17:19.744 "cntlid": 31, 00:17:19.744 "qid": 0, 00:17:19.744 "state": "enabled", 00:17:19.744 "thread": "nvmf_tgt_poll_group_000", 00:17:19.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:19.744 "listen_address": { 00:17:19.744 "trtype": "TCP", 00:17:19.744 "adrfam": "IPv4", 00:17:19.744 "traddr": "10.0.0.2", 00:17:19.744 "trsvcid": "4420" 00:17:19.744 }, 00:17:19.744 "peer_address": { 00:17:19.744 "trtype": "TCP", 00:17:19.744 "adrfam": "IPv4", 00:17:19.744 "traddr": "10.0.0.1", 00:17:19.744 "trsvcid": "60386" 00:17:19.744 }, 00:17:19.744 "auth": { 00:17:19.744 "state": "completed", 00:17:19.744 "digest": "sha256", 00:17:19.744 "dhgroup": "ffdhe4096" 00:17:19.744 } 00:17:19.744 } 00:17:19.744 ]' 00:17:19.744 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.744 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.744 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.744 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:19.744 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.006 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.006 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.006 17:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.006 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:17:20.006 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:17:20.576 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.837 17:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.098 00:17:21.358 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.358 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.358 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.358 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.358 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.358 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.358 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.358 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.358 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.358 { 00:17:21.358 "cntlid": 33, 00:17:21.358 "qid": 0, 00:17:21.358 "state": "enabled", 00:17:21.358 "thread": "nvmf_tgt_poll_group_000", 00:17:21.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:21.358 "listen_address": { 00:17:21.358 "trtype": "TCP", 00:17:21.358 "adrfam": "IPv4", 00:17:21.358 "traddr": "10.0.0.2", 00:17:21.358 "trsvcid": "4420" 00:17:21.358 }, 00:17:21.358 "peer_address": { 00:17:21.358 "trtype": "TCP", 00:17:21.358 "adrfam": "IPv4", 00:17:21.358 "traddr": "10.0.0.1", 00:17:21.358 "trsvcid": "60408" 00:17:21.358 }, 00:17:21.358 "auth": { 00:17:21.358 "state": "completed", 00:17:21.358 "digest": "sha256", 00:17:21.358 "dhgroup": "ffdhe6144" 00:17:21.358 } 00:17:21.358 } 00:17:21.358 ]' 00:17:21.358 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.618 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.618 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.618 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:21.618 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.618 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.618 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.618 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.877 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret 
DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:17:21.877 17:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:17:22.447 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.447 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:22.447 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.447 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.447 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.447 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.447 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:22.447 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:22.707 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:22.707 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.707 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:22.707 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:22.707 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:22.707 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.707 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.707 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.707 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.707 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.707 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.707 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.707 17:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.967 00:17:22.967 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.967 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.968 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.228 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.228 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.228 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.228 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.228 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.228 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.228 { 00:17:23.228 "cntlid": 35, 00:17:23.228 "qid": 0, 00:17:23.228 "state": "enabled", 00:17:23.228 "thread": "nvmf_tgt_poll_group_000", 00:17:23.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:23.228 "listen_address": { 00:17:23.228 "trtype": "TCP", 00:17:23.228 "adrfam": "IPv4", 00:17:23.228 "traddr": "10.0.0.2", 00:17:23.228 "trsvcid": "4420" 00:17:23.228 }, 00:17:23.228 "peer_address": { 00:17:23.228 "trtype": "TCP", 00:17:23.228 "adrfam": "IPv4", 00:17:23.228 "traddr": "10.0.0.1", 00:17:23.228 "trsvcid": "52260" 00:17:23.228 }, 00:17:23.228 "auth": { 00:17:23.228 "state": "completed", 00:17:23.228 "digest": "sha256", 00:17:23.228 "dhgroup": "ffdhe6144" 00:17:23.228 } 00:17:23.228 } 00:17:23.228 ]' 00:17:23.228 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.228 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.228 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.228 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:23.228 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.228 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.228 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.228 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.488 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:17:23.488 17:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:17:24.058 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.059 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.059 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.059 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.059 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.059 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.059 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:24.059 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:24.319 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:24.319 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.319 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:24.319 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:24.319 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:24.319 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.319 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.319 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.319 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.319 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.319 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.319 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.319 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.579 00:17:24.579 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.579 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.579 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.839 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.839 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.839 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.839 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.839 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.839 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.839 { 00:17:24.839 "cntlid": 37, 00:17:24.839 "qid": 0, 00:17:24.839 "state": "enabled", 00:17:24.839 "thread": "nvmf_tgt_poll_group_000", 00:17:24.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:24.839 "listen_address": { 00:17:24.839 "trtype": "TCP", 00:17:24.839 "adrfam": "IPv4", 00:17:24.839 "traddr": "10.0.0.2", 00:17:24.839 "trsvcid": "4420" 00:17:24.839 }, 00:17:24.839 "peer_address": { 00:17:24.839 "trtype": "TCP", 00:17:24.839 "adrfam": "IPv4", 00:17:24.839 "traddr": "10.0.0.1", 00:17:24.839 "trsvcid": "52290" 00:17:24.839 }, 00:17:24.839 "auth": { 00:17:24.839 "state": "completed", 00:17:24.839 "digest": "sha256", 00:17:24.839 "dhgroup": "ffdhe6144" 00:17:24.839 } 00:17:24.839 } 00:17:24.839 ]' 00:17:24.839 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.839 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.839 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.839 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:24.839 17:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.100 17:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.101 17:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:25.101 17:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.101 17:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:17:25.101 17:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:17:25.673 17:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.933 17:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:25.933 17:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.933 17:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.933 17:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.933 17:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.933 17:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:25.933 17:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:25.933 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:25.933 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.933 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:25.933 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:25.933 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:25.933 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.933 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:25.933 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.933 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.933 17:00:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.933 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:25.933 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.933 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.502 00:17:26.502 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.502 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.503 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.503 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.503 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.503 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.503 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.503 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.503 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.503 { 00:17:26.503 "cntlid": 39, 00:17:26.503 "qid": 0, 00:17:26.503 "state": "enabled", 00:17:26.503 "thread": "nvmf_tgt_poll_group_000", 00:17:26.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:26.503 "listen_address": { 00:17:26.503 "trtype": "TCP", 00:17:26.503 "adrfam": "IPv4", 00:17:26.503 "traddr": "10.0.0.2", 00:17:26.503 "trsvcid": "4420" 00:17:26.503 }, 00:17:26.503 "peer_address": { 00:17:26.503 "trtype": "TCP", 00:17:26.503 "adrfam": "IPv4", 00:17:26.503 "traddr": "10.0.0.1", 00:17:26.503 "trsvcid": "52324" 00:17:26.503 }, 00:17:26.503 "auth": { 00:17:26.503 "state": "completed", 00:17:26.503 "digest": "sha256", 00:17:26.503 "dhgroup": "ffdhe6144" 00:17:26.503 } 00:17:26.503 } 00:17:26.503 ]' 00:17:26.503 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.503 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.503 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.763 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:26.763 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.763 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:26.763 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.763 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.763 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:17:26.763 17:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
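
Every authentication round in this trace follows the same RPC sequence: constrain the host's allowed digest/dhgroup, authorize the host NQN on the subsystem with a DH-HMAC-CHAP key pair, attach a controller (which runs the handshake), inspect the resulting queue pair, then detach and deauthorize. The sketch below condenses one round as the trace performs it; the variable names are illustrative, the target-side rpc.py is assumed to use its default socket, and the named keys (key0/ckey0) are assumed to have been registered with the host RPC service earlier in the run, as this test does.

    # Sketch of one connect_authenticate round (sha256 / ffdhe8192 / key0).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTRPC="$RPC -s /var/tmp/host.sock"   # host-side SPDK service, as in the trace
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Limit the host to a single digest/dhgroup combination for this round.
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Authorize the host on the subsystem with host and controller keys
    # (the trace issues this through its target-side rpc_cmd wrapper).
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attaching a controller performs the DH-HMAC-CHAP handshake.
    $HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Tear down before the next digest/dhgroup/key combination.
    $HOSTRPC bdev_nvme_detach_controller nvme0
    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
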
00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.703 17:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.274 00:17:28.274 17:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.274 17:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.274 17:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.274 17:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.274 17:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.274 17:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.274 17:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.274 17:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.274 17:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.274 { 00:17:28.274 "cntlid": 41, 00:17:28.274 "qid": 0, 00:17:28.274 "state": "enabled", 00:17:28.274 "thread": "nvmf_tgt_poll_group_000", 00:17:28.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:28.274 "listen_address": { 00:17:28.274 "trtype": "TCP", 00:17:28.274 "adrfam": "IPv4", 00:17:28.274 "traddr": "10.0.0.2", 00:17:28.274 "trsvcid": "4420" 00:17:28.274 }, 00:17:28.274 "peer_address": { 00:17:28.274 "trtype": "TCP", 00:17:28.274 "adrfam": "IPv4", 00:17:28.274 "traddr": "10.0.0.1", 00:17:28.274 "trsvcid": "52340" 00:17:28.274 }, 00:17:28.274 "auth": { 00:17:28.274 "state": "completed", 00:17:28.274 "digest": "sha256", 00:17:28.274 "dhgroup": "ffdhe8192" 00:17:28.274 } 00:17:28.274 } 00:17:28.274 ]' 00:17:28.274 17:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.274 17:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.274 17:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.535 17:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.535 17:00:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.535 17:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.535 17:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.535 17:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.535 17:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:17:28.535 17:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.475 17:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.045 00:17:30.045 17:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.045 17:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.045 17:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.045 17:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.045 17:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.045 17:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.045 17:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.309 17:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.309 17:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.309 { 00:17:30.309 "cntlid": 43, 00:17:30.309 "qid": 0, 00:17:30.309 "state": "enabled", 00:17:30.309 "thread": "nvmf_tgt_poll_group_000", 00:17:30.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:30.310 "listen_address": { 00:17:30.310 "trtype": "TCP", 00:17:30.310 "adrfam": "IPv4", 00:17:30.310 "traddr": "10.0.0.2", 00:17:30.310 "trsvcid": "4420" 00:17:30.310 }, 00:17:30.310 "peer_address": { 00:17:30.310 "trtype": "TCP", 00:17:30.310 "adrfam": "IPv4", 00:17:30.310 "traddr": "10.0.0.1", 00:17:30.310 "trsvcid": "52372" 00:17:30.310 }, 00:17:30.310 "auth": { 00:17:30.310 "state": "completed", 00:17:30.310 "digest": "sha256", 00:17:30.310 "dhgroup": "ffdhe8192" 00:17:30.310 } 00:17:30.310 } 00:17:30.310 ]' 00:17:30.310 17:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.310 17:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:30.310 17:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.310 17:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.310 17:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.310 17:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.310 17:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.310 17:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.570 17:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:17:30.570 17:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:17:31.147 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.147 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.147 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.147 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.147 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.147 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.147 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:31.147 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:31.459 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:31.459 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.459 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:31.459 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:31.459 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:31.459 17:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.459 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.459 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.459 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.459 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.459 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.459 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.459 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.744 00:17:31.744 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.744 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.744 17:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.050 17:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.050 17:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.050 17:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.050 17:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.050 17:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.050 17:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.050 { 00:17:32.050 "cntlid": 45, 00:17:32.050 "qid": 0, 00:17:32.050 "state": "enabled", 00:17:32.050 "thread": "nvmf_tgt_poll_group_000", 00:17:32.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:32.050 "listen_address": { 00:17:32.050 "trtype": "TCP", 00:17:32.050 "adrfam": "IPv4", 00:17:32.050 "traddr": "10.0.0.2", 00:17:32.050 "trsvcid": "4420" 00:17:32.050 }, 00:17:32.050 "peer_address": { 00:17:32.050 "trtype": "TCP", 00:17:32.050 "adrfam": "IPv4", 00:17:32.050 "traddr": "10.0.0.1", 00:17:32.050 "trsvcid": "52388" 00:17:32.050 }, 00:17:32.050 "auth": { 00:17:32.050 "state": "completed", 00:17:32.050 "digest": "sha256", 00:17:32.050 "dhgroup": "ffdhe8192" 00:17:32.050 } 00:17:32.050 } 00:17:32.050 ]' 00:17:32.050 
17:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.050 17:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.050 17:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.050 17:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.050 17:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.050 17:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.050 17:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.050 17:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.310 17:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:17:32.310 17:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:17:33.249 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.249 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.249 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.249 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.249 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.249 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.250 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:33.250 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:33.250 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:33.250 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.250 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:33.250 17:00:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:33.250 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:33.250 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.250 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:33.250 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.250 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.250 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.250 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:33.250 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.250 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.821 00:17:33.821 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.821 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.821 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.821 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.821 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.821 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.821 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.821 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.821 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.821 { 00:17:33.821 "cntlid": 47, 00:17:33.821 "qid": 0, 00:17:33.821 "state": "enabled", 00:17:33.821 "thread": "nvmf_tgt_poll_group_000", 00:17:33.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:33.821 "listen_address": { 00:17:33.821 "trtype": "TCP", 00:17:33.821 "adrfam": "IPv4", 00:17:33.821 "traddr": "10.0.0.2", 00:17:33.821 "trsvcid": "4420" 00:17:33.821 }, 00:17:33.821 "peer_address": { 00:17:33.821 "trtype": "TCP", 00:17:33.821 "adrfam": "IPv4", 00:17:33.821 "traddr": "10.0.0.1", 00:17:33.821 "trsvcid": "34118" 00:17:33.821 }, 00:17:33.821 "auth": { 00:17:33.821 "state": "completed", 00:17:33.821 
"digest": "sha256", 00:17:33.821 "dhgroup": "ffdhe8192" 00:17:33.821 } 00:17:33.821 } 00:17:33.821 ]' 00:17:33.821 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.821 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.081 17:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.081 17:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.081 17:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.081 17:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.081 17:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.081 17:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.340 17:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:17:34.340 17:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:17:34.909 17:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.909 17:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:34.909 17:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.909 17:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.909 17:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.909 17:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:34.909 17:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.909 17:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.909 17:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:34.909 17:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:35.170 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:35.170 17:00:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.170 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.170 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:35.170 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:35.170 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.170 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.170 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.170 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.170 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.170 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.170 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.170 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.430 00:17:35.430 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.430 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.430 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.430 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.430 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.430 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.430 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.430 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.430 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.430 { 00:17:35.430 "cntlid": 49, 00:17:35.430 "qid": 0, 00:17:35.430 "state": "enabled", 00:17:35.430 "thread": "nvmf_tgt_poll_group_000", 00:17:35.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:35.430 "listen_address": { 00:17:35.430 "trtype": "TCP", 00:17:35.430 "adrfam": "IPv4", 
00:17:35.430 "traddr": "10.0.0.2", 00:17:35.430 "trsvcid": "4420" 00:17:35.430 }, 00:17:35.430 "peer_address": { 00:17:35.430 "trtype": "TCP", 00:17:35.430 "adrfam": "IPv4", 00:17:35.430 "traddr": "10.0.0.1", 00:17:35.430 "trsvcid": "34152" 00:17:35.430 }, 00:17:35.430 "auth": { 00:17:35.430 "state": "completed", 00:17:35.430 "digest": "sha384", 00:17:35.430 "dhgroup": "null" 00:17:35.430 } 00:17:35.430 } 00:17:35.430 ]' 00:17:35.430 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.689 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.689 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.689 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:35.689 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.689 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.689 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.689 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.949 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:17:35.949 17:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:17:36.519 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.519 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.519 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.519 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.519 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.519 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.519 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:36.519 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:36.779 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:36.779 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.779 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:36.779 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:36.779 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:36.779 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.779 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.779 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.779 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.779 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.779 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.779 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.779 17:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.040 00:17:37.040 17:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.040 17:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.040 17:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.040 17:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.040 17:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.040 17:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.040 17:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.040 17:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.040 17:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.040 { 00:17:37.040 "cntlid": 51, 00:17:37.040 "qid": 0, 00:17:37.040 "state": "enabled", 
00:17:37.040 "thread": "nvmf_tgt_poll_group_000", 00:17:37.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:37.040 "listen_address": { 00:17:37.040 "trtype": "TCP", 00:17:37.040 "adrfam": "IPv4", 00:17:37.040 "traddr": "10.0.0.2", 00:17:37.040 "trsvcid": "4420" 00:17:37.040 }, 00:17:37.040 "peer_address": { 00:17:37.040 "trtype": "TCP", 00:17:37.040 "adrfam": "IPv4", 00:17:37.040 "traddr": "10.0.0.1", 00:17:37.040 "trsvcid": "34178" 00:17:37.040 }, 00:17:37.040 "auth": { 00:17:37.040 "state": "completed", 00:17:37.040 "digest": "sha384", 00:17:37.040 "dhgroup": "null" 00:17:37.040 } 00:17:37.040 } 00:17:37.040 ]' 00:17:37.040 17:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.300 17:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.300 17:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.300 17:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:37.300 17:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.300 17:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.300 17:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.300 17:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.559 17:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:17:37.559 17:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:17:38.129 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.129 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.129 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.129 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.129 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.129 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.129 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:38.129 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:38.391 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:38.391 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.391 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:38.391 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:38.391 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:38.391 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.391 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.391 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.391 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.391 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.391 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.391 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.391 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.652 00:17:38.652 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.652 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.652 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.652 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.652 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.652 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.652 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.652 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.652 17:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.652 { 00:17:38.652 "cntlid": 53, 00:17:38.652 "qid": 0, 00:17:38.652 "state": "enabled", 00:17:38.652 "thread": "nvmf_tgt_poll_group_000", 00:17:38.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:38.652 "listen_address": { 00:17:38.652 "trtype": "TCP", 00:17:38.652 "adrfam": "IPv4", 00:17:38.652 "traddr": "10.0.0.2", 00:17:38.652 "trsvcid": "4420" 00:17:38.652 }, 00:17:38.652 "peer_address": { 00:17:38.652 "trtype": "TCP", 00:17:38.652 "adrfam": "IPv4", 00:17:38.652 "traddr": "10.0.0.1", 00:17:38.652 "trsvcid": "34208" 00:17:38.652 }, 00:17:38.652 "auth": { 00:17:38.652 "state": "completed", 00:17:38.652 "digest": "sha384", 00:17:38.652 "dhgroup": "null" 00:17:38.652 } 00:17:38.652 } 00:17:38.652 ]' 00:17:38.653 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.914 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.914 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.914 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:38.914 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.914 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.914 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.914 17:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.176 17:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:17:39.176 17:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:17:39.747 17:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.747 17:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.747 17:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.748 17:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.748 17:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.748 17:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:39.748 17:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:39.748 17:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:40.008 17:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:40.008 17:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.008 17:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.008 17:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:40.008 17:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:40.008 17:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.008 17:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:40.008 17:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.008 17:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.008 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.008 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:40.008 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.008 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.269 00:17:40.269 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.269 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.269 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.269 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.269 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.269 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.269 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.530 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.530 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.530 { 00:17:40.530 "cntlid": 55, 00:17:40.530 "qid": 0, 00:17:40.530 "state": "enabled", 00:17:40.530 "thread": "nvmf_tgt_poll_group_000", 00:17:40.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:40.530 "listen_address": { 00:17:40.530 "trtype": "TCP", 00:17:40.530 "adrfam": "IPv4", 00:17:40.530 "traddr": "10.0.0.2", 00:17:40.530 "trsvcid": "4420" 00:17:40.530 }, 00:17:40.530 "peer_address": { 00:17:40.530 "trtype": "TCP", 00:17:40.530 "adrfam": "IPv4", 00:17:40.530 "traddr": "10.0.0.1", 00:17:40.530 "trsvcid": "34226" 00:17:40.530 }, 00:17:40.530 "auth": { 00:17:40.530 "state": "completed", 00:17:40.530 "digest": "sha384", 00:17:40.530 "dhgroup": "null" 00:17:40.530 } 00:17:40.530 } 00:17:40.530 ]' 00:17:40.530 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.530 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.530 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.530 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:40.530 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.530 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.530 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.530 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.800 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:17:40.800 17:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:17:41.378 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.378 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.378 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.378 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.378 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.378 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.378 17:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.378 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:41.378 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:41.638 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:41.638 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.638 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:41.638 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:41.638 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:41.638 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.638 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.638 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.638 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.638 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.638 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.638 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.638 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.897 00:17:41.897 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.897 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.897 17:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.897 17:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.897 17:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.897 17:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:41.897 17:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.897 17:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.897 17:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.897 { 00:17:41.897 "cntlid": 57, 00:17:41.897 "qid": 0, 00:17:41.897 "state": "enabled", 00:17:41.897 "thread": "nvmf_tgt_poll_group_000", 00:17:41.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:41.897 "listen_address": { 00:17:41.897 "trtype": "TCP", 00:17:41.897 "adrfam": "IPv4", 00:17:41.897 "traddr": "10.0.0.2", 00:17:41.897 "trsvcid": "4420" 00:17:41.897 }, 00:17:41.897 "peer_address": { 00:17:41.897 "trtype": "TCP", 00:17:41.897 "adrfam": "IPv4", 00:17:41.897 "traddr": "10.0.0.1", 00:17:41.897 "trsvcid": "34242" 00:17:41.897 }, 00:17:41.897 "auth": { 00:17:41.897 "state": "completed", 00:17:41.897 "digest": "sha384", 00:17:41.897 "dhgroup": "ffdhe2048" 00:17:41.897 } 00:17:41.897 } 00:17:41.897 ]' 00:17:42.157 17:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.157 17:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.157 17:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.157 17:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:42.157 17:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.157 17:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.157 17:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.157 17:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.417 17:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:17:42.417 17:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:17:42.987 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.987 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.987 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.987 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.987 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.987 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.987 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:42.987 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:43.248 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:43.248 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.248 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:43.248 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:43.248 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:43.248 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.248 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.248 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.248 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.248 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.248 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.248 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.248 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.508 00:17:43.508 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.508 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.508 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.768 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.768 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.768 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.768 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.768 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.768 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.768 { 00:17:43.768 "cntlid": 59, 00:17:43.768 "qid": 0, 00:17:43.768 "state": "enabled", 00:17:43.768 "thread": "nvmf_tgt_poll_group_000", 00:17:43.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:43.768 "listen_address": { 00:17:43.768 "trtype": "TCP", 00:17:43.768 "adrfam": "IPv4", 00:17:43.768 "traddr": "10.0.0.2", 00:17:43.768 "trsvcid": "4420" 00:17:43.768 }, 00:17:43.768 "peer_address": { 00:17:43.768 "trtype": "TCP", 00:17:43.768 "adrfam": "IPv4", 00:17:43.768 "traddr": "10.0.0.1", 00:17:43.768 "trsvcid": "38424" 00:17:43.768 }, 00:17:43.768 "auth": { 00:17:43.768 "state": "completed", 00:17:43.768 "digest": "sha384", 00:17:43.768 "dhgroup": "ffdhe2048" 00:17:43.768 } 00:17:43.768 } 00:17:43.768 ]' 00:17:43.768 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.768 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.768 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.768 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:43.768 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.768 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.768 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.768 17:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.028 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:17:44.028 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:17:44.598 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.598 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.598 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.598 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.598 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.598 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.598 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:44.598 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:44.858 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:44.858 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.858 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:44.858 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:44.858 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:44.858 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.858 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.858 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.858 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.858 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.858 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.858 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.858 17:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.117 00:17:45.117 17:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.117 17:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.117 17:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.377 17:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.377 17:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.377 17:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.377 17:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.377 17:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.377 17:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.377 { 00:17:45.377 "cntlid": 61, 00:17:45.377 "qid": 0, 00:17:45.377 "state": "enabled", 00:17:45.377 "thread": "nvmf_tgt_poll_group_000", 00:17:45.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:45.377 "listen_address": { 00:17:45.377 "trtype": "TCP", 00:17:45.377 "adrfam": "IPv4", 00:17:45.377 "traddr": "10.0.0.2", 00:17:45.377 "trsvcid": "4420" 00:17:45.377 }, 00:17:45.377 "peer_address": { 00:17:45.377 "trtype": "TCP", 00:17:45.377 "adrfam": "IPv4", 00:17:45.377 "traddr": "10.0.0.1", 00:17:45.377 "trsvcid": "38452" 00:17:45.377 }, 00:17:45.377 "auth": { 00:17:45.377 "state": "completed", 00:17:45.377 "digest": "sha384", 00:17:45.377 "dhgroup": "ffdhe2048" 00:17:45.377 } 00:17:45.377 } 00:17:45.377 ]' 00:17:45.377 17:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.377 17:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.377 17:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.377 17:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:45.377 17:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.377 17:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.377 17:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.377 17:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.637 17:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:17:45.637 17:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:17:46.207 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.207 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:46.207 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.207 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.207 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.207 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.207 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:46.207 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:46.466 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:46.466 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.466 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:46.466 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:46.466 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:46.466 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.466 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:46.466 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.466 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.466 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.466 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:46.466 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:46.466 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:46.724 00:17:46.724 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.724 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.724 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.983 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.983 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.983 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.983 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.983 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.983 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.983 { 00:17:46.983 "cntlid": 63, 00:17:46.983 "qid": 0, 00:17:46.983 "state": "enabled", 00:17:46.983 "thread": "nvmf_tgt_poll_group_000", 00:17:46.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:46.983 "listen_address": { 00:17:46.983 "trtype": "TCP", 00:17:46.983 "adrfam": "IPv4", 00:17:46.983 "traddr": "10.0.0.2", 00:17:46.983 "trsvcid": "4420" 00:17:46.983 }, 00:17:46.983 "peer_address": { 00:17:46.983 "trtype": "TCP", 00:17:46.983 "adrfam": "IPv4", 00:17:46.983 "traddr": "10.0.0.1", 00:17:46.983 "trsvcid": "38472" 00:17:46.983 }, 00:17:46.983 "auth": { 00:17:46.983 "state": "completed", 00:17:46.983 "digest": "sha384", 00:17:46.983 "dhgroup": "ffdhe2048" 00:17:46.983 } 00:17:46.983 } 00:17:46.983 ]' 00:17:46.983 17:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.983 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.983 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.983 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.983 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.983 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.983 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.983 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.242 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:17:47.242 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:17:47.812 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:47.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.812 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.812 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.812 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.812 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.812 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.812 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.812 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.812 17:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:48.073 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:48.073 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.073 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:48.073 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:48.073 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:48.073 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.073 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.073 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.073 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.073 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.073 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.073 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.073 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.333 
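[editor's note] Each attach is followed by the verification visible above and below: the controller name is checked, the qpair's auth block is asserted field by field, the bdev controller is detached, and the same key material is then pushed through nvme-cli as raw DHHC-1 secrets before the host is unbound for the next combination. A condensed sketch of that second half, under the same assumptions and variables as the earlier sketch, with the DHHC-1 blobs elided rather than repeated:

# Assert the negotiated digest/dhgroup and that authentication completed.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Drop the bdev path, then redo the handshake in-band with nvme-cli,
# passing the DHHC-1 secrets directly (values elided here).
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
  --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
  --dhchap-secret 'DHHC-1:...' --dhchap-ctrl-secret 'DHHC-1:...'
nvme disconnect -n "$subnqn"

# Unbind the host so the next digest/dhgroup/key iteration starts clean.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"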
00:17:48.333 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.333 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.333 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.593 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.593 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.593 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.593 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.593 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.593 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.593 { 00:17:48.593 "cntlid": 65, 00:17:48.593 "qid": 0, 00:17:48.593 "state": "enabled", 00:17:48.593 "thread": "nvmf_tgt_poll_group_000", 00:17:48.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:48.593 "listen_address": { 00:17:48.593 "trtype": "TCP", 00:17:48.593 "adrfam": "IPv4", 00:17:48.593 "traddr": "10.0.0.2", 00:17:48.593 "trsvcid": "4420" 00:17:48.593 }, 00:17:48.593 "peer_address": { 00:17:48.593 "trtype": "TCP", 00:17:48.593 "adrfam": "IPv4", 00:17:48.593 "traddr": "10.0.0.1", 00:17:48.593 "trsvcid": "38500" 00:17:48.593 }, 00:17:48.593 "auth": { 00:17:48.593 "state": "completed", 00:17:48.593 "digest": "sha384", 00:17:48.593 "dhgroup": "ffdhe3072" 00:17:48.593 } 00:17:48.593 } 00:17:48.593 ]' 00:17:48.593 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.593 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.593 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.593 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:48.593 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.593 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.593 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.593 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.853 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:17:48.854 17:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:17:49.424 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.424 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.424 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.424 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.424 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.424 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.424 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:49.424 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:49.685 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:49.685 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.685 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:49.685 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:49.685 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:49.685 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.685 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.685 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.685 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.685 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.685 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.685 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.685 17:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.946 00:17:49.946 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.946 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.946 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.205 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.206 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.206 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.206 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.206 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.206 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.206 { 00:17:50.206 "cntlid": 67, 00:17:50.206 "qid": 0, 00:17:50.206 "state": "enabled", 00:17:50.206 "thread": "nvmf_tgt_poll_group_000", 00:17:50.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:50.206 "listen_address": { 00:17:50.206 "trtype": "TCP", 00:17:50.206 "adrfam": "IPv4", 00:17:50.206 "traddr": "10.0.0.2", 00:17:50.206 "trsvcid": "4420" 00:17:50.206 }, 00:17:50.206 "peer_address": { 00:17:50.206 "trtype": "TCP", 00:17:50.206 "adrfam": "IPv4", 00:17:50.206 "traddr": "10.0.0.1", 00:17:50.206 "trsvcid": "38532" 00:17:50.206 }, 00:17:50.206 "auth": { 00:17:50.206 "state": "completed", 00:17:50.206 "digest": "sha384", 00:17:50.206 "dhgroup": "ffdhe3072" 00:17:50.206 } 00:17:50.206 } 00:17:50.206 ]' 00:17:50.206 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.206 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.206 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.206 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:50.206 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.206 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.206 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.206 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.466 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret 
DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:17:50.466 17:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.408 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.668 00:17:51.668 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.668 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.668 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.928 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.928 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.928 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.928 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.928 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.928 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.928 { 00:17:51.928 "cntlid": 69, 00:17:51.928 "qid": 0, 00:17:51.928 "state": "enabled", 00:17:51.928 "thread": "nvmf_tgt_poll_group_000", 00:17:51.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:51.928 "listen_address": { 00:17:51.928 "trtype": "TCP", 00:17:51.928 "adrfam": "IPv4", 00:17:51.928 "traddr": "10.0.0.2", 00:17:51.928 "trsvcid": "4420" 00:17:51.928 }, 00:17:51.928 "peer_address": { 00:17:51.928 "trtype": "TCP", 00:17:51.928 "adrfam": "IPv4", 00:17:51.928 "traddr": "10.0.0.1", 00:17:51.928 "trsvcid": "38552" 00:17:51.928 }, 00:17:51.928 "auth": { 00:17:51.928 "state": "completed", 00:17:51.928 "digest": "sha384", 00:17:51.928 "dhgroup": "ffdhe3072" 00:17:51.928 } 00:17:51.928 } 00:17:51.928 ]' 00:17:51.928 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.928 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.928 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.928 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:51.928 17:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.928 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.928 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.928 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:52.189 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:17:52.189 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:17:52.759 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.759 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.759 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.759 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.759 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.759 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.759 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:52.759 17:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:53.018 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:53.018 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.018 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:53.018 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:53.018 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:53.018 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.018 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:53.018 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.018 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.018 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.018 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:17:53.018 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.018 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.276 00:17:53.276 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.276 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.276 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.535 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.535 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.535 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.535 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.535 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.535 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.535 { 00:17:53.535 "cntlid": 71, 00:17:53.535 "qid": 0, 00:17:53.535 "state": "enabled", 00:17:53.535 "thread": "nvmf_tgt_poll_group_000", 00:17:53.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.535 "listen_address": { 00:17:53.535 "trtype": "TCP", 00:17:53.535 "adrfam": "IPv4", 00:17:53.535 "traddr": "10.0.0.2", 00:17:53.535 "trsvcid": "4420" 00:17:53.535 }, 00:17:53.535 "peer_address": { 00:17:53.535 "trtype": "TCP", 00:17:53.535 "adrfam": "IPv4", 00:17:53.535 "traddr": "10.0.0.1", 00:17:53.535 "trsvcid": "57704" 00:17:53.535 }, 00:17:53.535 "auth": { 00:17:53.535 "state": "completed", 00:17:53.535 "digest": "sha384", 00:17:53.535 "dhgroup": "ffdhe3072" 00:17:53.535 } 00:17:53.535 } 00:17:53.535 ]' 00:17:53.535 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.535 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.535 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.535 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:53.535 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.535 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.535 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.536 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.794 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:17:53.794 17:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:17:54.361 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.361 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.361 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.361 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.361 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.361 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.361 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.361 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:54.362 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:54.621 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:54.621 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.621 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:54.621 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:54.621 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:54.621 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.621 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.621 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.621 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.621 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:54.621 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.621 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.621 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.881 00:17:54.881 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.881 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.881 17:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.140 17:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.140 17:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.140 17:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.140 17:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.140 17:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.140 17:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.140 { 00:17:55.140 "cntlid": 73, 00:17:55.140 "qid": 0, 00:17:55.140 "state": "enabled", 00:17:55.140 "thread": "nvmf_tgt_poll_group_000", 00:17:55.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:55.140 "listen_address": { 00:17:55.140 "trtype": "TCP", 00:17:55.140 "adrfam": "IPv4", 00:17:55.140 "traddr": "10.0.0.2", 00:17:55.140 "trsvcid": "4420" 00:17:55.140 }, 00:17:55.140 "peer_address": { 00:17:55.140 "trtype": "TCP", 00:17:55.140 "adrfam": "IPv4", 00:17:55.140 "traddr": "10.0.0.1", 00:17:55.140 "trsvcid": "57726" 00:17:55.140 }, 00:17:55.140 "auth": { 00:17:55.140 "state": "completed", 00:17:55.140 "digest": "sha384", 00:17:55.140 "dhgroup": "ffdhe4096" 00:17:55.140 } 00:17:55.140 } 00:17:55.140 ]' 00:17:55.140 17:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.140 17:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:55.140 17:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.140 17:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:55.140 17:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.140 17:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.140 
17:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.140 17:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.400 17:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:17:55.400 17:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:17:55.976 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.976 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.976 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.976 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.976 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.976 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.976 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:55.976 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:56.235 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:56.235 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.235 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:56.235 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:56.235 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:56.235 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.235 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.235 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.235 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.235 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.235 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.235 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.235 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.496 00:17:56.496 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.496 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.496 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.756 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.756 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.756 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.756 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.756 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.756 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.756 { 00:17:56.756 "cntlid": 75, 00:17:56.756 "qid": 0, 00:17:56.756 "state": "enabled", 00:17:56.756 "thread": "nvmf_tgt_poll_group_000", 00:17:56.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:56.756 "listen_address": { 00:17:56.756 "trtype": "TCP", 00:17:56.756 "adrfam": "IPv4", 00:17:56.756 "traddr": "10.0.0.2", 00:17:56.756 "trsvcid": "4420" 00:17:56.756 }, 00:17:56.756 "peer_address": { 00:17:56.756 "trtype": "TCP", 00:17:56.756 "adrfam": "IPv4", 00:17:56.756 "traddr": "10.0.0.1", 00:17:56.756 "trsvcid": "57758" 00:17:56.756 }, 00:17:56.756 "auth": { 00:17:56.756 "state": "completed", 00:17:56.756 "digest": "sha384", 00:17:56.756 "dhgroup": "ffdhe4096" 00:17:56.756 } 00:17:56.756 } 00:17:56.756 ]' 00:17:56.756 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.756 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.756 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.756 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:56.756 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.756 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.756 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.756 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.017 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:17:57.017 17:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:17:57.585 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.585 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.585 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.585 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.585 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.585 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.585 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:57.585 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:57.846 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:57.846 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.846 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:57.846 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:57.846 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:57.846 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.846 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.846 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.846 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.846 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.846 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.846 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.846 17:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.106 00:17:58.106 17:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.106 17:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.106 17:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.366 17:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.366 17:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.366 17:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.366 17:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.366 17:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.366 17:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.366 { 00:17:58.366 "cntlid": 77, 00:17:58.366 "qid": 0, 00:17:58.366 "state": "enabled", 00:17:58.366 "thread": "nvmf_tgt_poll_group_000", 00:17:58.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:58.366 "listen_address": { 00:17:58.366 "trtype": "TCP", 00:17:58.366 "adrfam": "IPv4", 00:17:58.366 "traddr": "10.0.0.2", 00:17:58.366 "trsvcid": "4420" 00:17:58.366 }, 00:17:58.366 "peer_address": { 00:17:58.366 "trtype": "TCP", 00:17:58.366 "adrfam": "IPv4", 00:17:58.366 "traddr": "10.0.0.1", 00:17:58.366 "trsvcid": "57796" 00:17:58.366 }, 00:17:58.366 "auth": { 00:17:58.366 "state": "completed", 00:17:58.366 "digest": "sha384", 00:17:58.366 "dhgroup": "ffdhe4096" 00:17:58.366 } 00:17:58.366 } 00:17:58.366 ]' 00:17:58.366 17:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.366 17:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:58.366 17:00:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.366 17:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:58.366 17:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.366 17:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.366 17:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.366 17:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.627 17:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:17:58.627 17:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:17:59.198 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.198 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.198 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.198 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.198 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.198 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.198 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:59.198 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:59.460 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:59.460 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.460 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:59.460 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:59.460 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:59.460 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.460 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:59.460 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.460 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.460 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.460 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:59.460 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:59.460 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:59.721 00:17:59.721 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.721 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.721 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.982 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.982 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.982 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.982 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.982 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.982 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.982 { 00:17:59.982 "cntlid": 79, 00:17:59.982 "qid": 0, 00:17:59.982 "state": "enabled", 00:17:59.982 "thread": "nvmf_tgt_poll_group_000", 00:17:59.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:59.982 "listen_address": { 00:17:59.982 "trtype": "TCP", 00:17:59.982 "adrfam": "IPv4", 00:17:59.982 "traddr": "10.0.0.2", 00:17:59.982 "trsvcid": "4420" 00:17:59.982 }, 00:17:59.982 "peer_address": { 00:17:59.982 "trtype": "TCP", 00:17:59.982 "adrfam": "IPv4", 00:17:59.982 "traddr": "10.0.0.1", 00:17:59.982 "trsvcid": "57810" 00:17:59.982 }, 00:17:59.983 "auth": { 00:17:59.983 "state": "completed", 00:17:59.983 "digest": "sha384", 00:17:59.983 "dhgroup": "ffdhe4096" 00:17:59.983 } 00:17:59.983 } 00:17:59.983 ]' 00:17:59.983 17:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.983 17:00:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.983 17:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.983 17:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.983 17:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.983 17:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.983 17:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.983 17:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.242 17:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:18:00.242 17:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:18:00.812 17:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.812 17:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.812 17:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.812 17:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.812 17:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.812 17:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.812 17:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.812 17:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:00.812 17:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:01.073 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:01.073 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.073 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:01.073 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:01.073 17:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:01.073 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.073 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.073 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.073 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.073 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.073 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.073 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.073 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.334 00:18:01.334 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.334 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.334 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.595 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.595 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.595 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.595 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.595 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.595 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.595 { 00:18:01.595 "cntlid": 81, 00:18:01.595 "qid": 0, 00:18:01.595 "state": "enabled", 00:18:01.595 "thread": "nvmf_tgt_poll_group_000", 00:18:01.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:01.595 "listen_address": { 00:18:01.595 "trtype": "TCP", 00:18:01.595 "adrfam": "IPv4", 00:18:01.595 "traddr": "10.0.0.2", 00:18:01.595 "trsvcid": "4420" 00:18:01.595 }, 00:18:01.595 "peer_address": { 00:18:01.595 "trtype": "TCP", 00:18:01.595 "adrfam": "IPv4", 00:18:01.595 "traddr": "10.0.0.1", 00:18:01.595 "trsvcid": "57842" 00:18:01.595 }, 00:18:01.595 "auth": { 00:18:01.595 "state": "completed", 00:18:01.595 "digest": 
"sha384", 00:18:01.595 "dhgroup": "ffdhe6144" 00:18:01.595 } 00:18:01.595 } 00:18:01.595 ]' 00:18:01.595 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.595 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:01.595 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.595 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:01.595 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.855 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.855 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.855 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.855 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:18:01.855 17:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:18:02.425 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.425 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.425 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.425 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.684 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.684 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.684 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:02.684 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:02.684 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:02.684 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.684 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:02.684 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:02.684 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:02.684 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.684 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.684 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.684 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.684 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.684 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.684 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.684 17:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.944 00:18:02.944 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.944 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.944 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.204 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.204 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.204 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.204 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.204 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.204 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.204 { 00:18:03.204 "cntlid": 83, 00:18:03.204 "qid": 0, 00:18:03.204 "state": "enabled", 00:18:03.204 "thread": "nvmf_tgt_poll_group_000", 00:18:03.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:03.204 "listen_address": { 00:18:03.205 "trtype": "TCP", 00:18:03.205 "adrfam": "IPv4", 00:18:03.205 "traddr": "10.0.0.2", 00:18:03.205 
"trsvcid": "4420" 00:18:03.205 }, 00:18:03.205 "peer_address": { 00:18:03.205 "trtype": "TCP", 00:18:03.205 "adrfam": "IPv4", 00:18:03.205 "traddr": "10.0.0.1", 00:18:03.205 "trsvcid": "35562" 00:18:03.205 }, 00:18:03.205 "auth": { 00:18:03.205 "state": "completed", 00:18:03.205 "digest": "sha384", 00:18:03.205 "dhgroup": "ffdhe6144" 00:18:03.205 } 00:18:03.205 } 00:18:03.205 ]' 00:18:03.205 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.205 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:03.205 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.465 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:03.465 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.465 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.465 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.465 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.465 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:18:03.465 17:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:18:04.405 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.405 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.405 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.406 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.406 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.406 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.406 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:04.406 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:04.406 
17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:04.406 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.406 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:04.406 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:04.406 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:04.406 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.406 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.406 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.406 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.406 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.406 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.406 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.406 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.666 00:18:04.666 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.666 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.666 17:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.927 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.927 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.927 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.927 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.927 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.927 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.927 { 00:18:04.927 "cntlid": 85, 00:18:04.927 "qid": 0, 00:18:04.927 "state": "enabled", 00:18:04.927 "thread": "nvmf_tgt_poll_group_000", 00:18:04.927 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.927 "listen_address": { 00:18:04.927 "trtype": "TCP", 00:18:04.927 "adrfam": "IPv4", 00:18:04.927 "traddr": "10.0.0.2", 00:18:04.927 "trsvcid": "4420" 00:18:04.927 }, 00:18:04.927 "peer_address": { 00:18:04.927 "trtype": "TCP", 00:18:04.927 "adrfam": "IPv4", 00:18:04.927 "traddr": "10.0.0.1", 00:18:04.927 "trsvcid": "35592" 00:18:04.927 }, 00:18:04.927 "auth": { 00:18:04.927 "state": "completed", 00:18:04.927 "digest": "sha384", 00:18:04.927 "dhgroup": "ffdhe6144" 00:18:04.927 } 00:18:04.927 } 00:18:04.927 ]' 00:18:04.927 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.927 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.927 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.188 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:05.188 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.188 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.188 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.188 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.188 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:18:05.188 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:18:06.128 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.128 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.128 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.128 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.128 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.128 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.128 17:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:06.128 17:00:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:06.128 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:06.128 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.129 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:06.129 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:06.129 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:06.129 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.129 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:06.129 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.129 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.129 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.129 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:06.129 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.129 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.389 00:18:06.389 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.389 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.389 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.697 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.697 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.697 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.697 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.697 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.697 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.698 { 00:18:06.698 "cntlid": 87, 
00:18:06.698 "qid": 0, 00:18:06.698 "state": "enabled", 00:18:06.698 "thread": "nvmf_tgt_poll_group_000", 00:18:06.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.698 "listen_address": { 00:18:06.698 "trtype": "TCP", 00:18:06.698 "adrfam": "IPv4", 00:18:06.698 "traddr": "10.0.0.2", 00:18:06.698 "trsvcid": "4420" 00:18:06.698 }, 00:18:06.698 "peer_address": { 00:18:06.698 "trtype": "TCP", 00:18:06.698 "adrfam": "IPv4", 00:18:06.698 "traddr": "10.0.0.1", 00:18:06.698 "trsvcid": "35614" 00:18:06.698 }, 00:18:06.698 "auth": { 00:18:06.698 "state": "completed", 00:18:06.698 "digest": "sha384", 00:18:06.698 "dhgroup": "ffdhe6144" 00:18:06.698 } 00:18:06.698 } 00:18:06.698 ]' 00:18:06.698 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.698 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.698 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.698 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.698 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.698 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.698 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.698 17:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.958 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:18:06.958 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:18:07.527 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.527 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.527 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.527 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.527 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.527 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.527 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.527 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:07.527 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:07.788 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:07.788 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.788 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:07.788 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:07.788 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:07.788 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.788 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.788 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.788 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.788 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.788 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.788 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.788 17:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.358 00:18:08.358 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.358 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.358 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.358 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.358 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.358 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.358 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.358 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.358 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.358 { 00:18:08.358 "cntlid": 89, 00:18:08.358 "qid": 0, 00:18:08.358 "state": "enabled", 00:18:08.358 "thread": "nvmf_tgt_poll_group_000", 00:18:08.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:08.358 "listen_address": { 00:18:08.358 "trtype": "TCP", 00:18:08.358 "adrfam": "IPv4", 00:18:08.358 "traddr": "10.0.0.2", 00:18:08.358 "trsvcid": "4420" 00:18:08.358 }, 00:18:08.358 "peer_address": { 00:18:08.358 "trtype": "TCP", 00:18:08.358 "adrfam": "IPv4", 00:18:08.358 "traddr": "10.0.0.1", 00:18:08.358 "trsvcid": "35644" 00:18:08.358 }, 00:18:08.358 "auth": { 00:18:08.358 "state": "completed", 00:18:08.358 "digest": "sha384", 00:18:08.358 "dhgroup": "ffdhe8192" 00:18:08.358 } 00:18:08.358 } 00:18:08.358 ]' 00:18:08.620 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.620 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:08.620 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.620 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:08.620 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.620 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.620 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.620 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.880 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:18:08.880 17:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:18:09.489 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.489 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.489 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.489 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.489 17:01:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.489 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.489 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:09.489 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:09.844 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:09.844 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.844 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:09.844 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:09.844 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:09.844 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.844 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.844 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.844 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.844 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.844 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.844 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.844 17:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.135 00:18:10.135 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.135 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.135 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.396 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.396 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:10.396 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.396 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.396 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.396 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.396 { 00:18:10.396 "cntlid": 91, 00:18:10.396 "qid": 0, 00:18:10.396 "state": "enabled", 00:18:10.396 "thread": "nvmf_tgt_poll_group_000", 00:18:10.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:10.396 "listen_address": { 00:18:10.396 "trtype": "TCP", 00:18:10.396 "adrfam": "IPv4", 00:18:10.396 "traddr": "10.0.0.2", 00:18:10.396 "trsvcid": "4420" 00:18:10.396 }, 00:18:10.396 "peer_address": { 00:18:10.396 "trtype": "TCP", 00:18:10.396 "adrfam": "IPv4", 00:18:10.396 "traddr": "10.0.0.1", 00:18:10.396 "trsvcid": "35686" 00:18:10.396 }, 00:18:10.396 "auth": { 00:18:10.396 "state": "completed", 00:18:10.396 "digest": "sha384", 00:18:10.396 "dhgroup": "ffdhe8192" 00:18:10.396 } 00:18:10.396 } 00:18:10.396 ]' 00:18:10.396 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.396 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:10.396 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.396 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:10.396 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.656 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.656 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.656 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.656 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:18:10.656 17:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.598 17:01:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.598 17:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.169 00:18:12.169 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.169 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.169 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.169 17:01:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.169 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.169 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.169 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.169 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.169 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.169 { 00:18:12.169 "cntlid": 93, 00:18:12.169 "qid": 0, 00:18:12.169 "state": "enabled", 00:18:12.169 "thread": "nvmf_tgt_poll_group_000", 00:18:12.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:12.169 "listen_address": { 00:18:12.169 "trtype": "TCP", 00:18:12.169 "adrfam": "IPv4", 00:18:12.169 "traddr": "10.0.0.2", 00:18:12.169 "trsvcid": "4420" 00:18:12.169 }, 00:18:12.169 "peer_address": { 00:18:12.169 "trtype": "TCP", 00:18:12.169 "adrfam": "IPv4", 00:18:12.169 "traddr": "10.0.0.1", 00:18:12.169 "trsvcid": "35720" 00:18:12.169 }, 00:18:12.169 "auth": { 00:18:12.169 "state": "completed", 00:18:12.169 "digest": "sha384", 00:18:12.169 "dhgroup": "ffdhe8192" 00:18:12.169 } 00:18:12.169 } 00:18:12.169 ]' 00:18:12.169 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.170 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.170 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.429 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.429 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.429 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.429 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.429 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.690 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:18:12.690 17:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:18:13.260 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.260 17:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.260 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.260 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.260 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.260 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.260 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:13.260 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:13.520 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:13.520 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.520 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:13.520 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:13.520 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:13.520 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.520 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:13.520 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.520 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.520 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.520 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:13.520 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:13.520 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.091 00:18:14.091 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.091 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.091 17:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.091 17:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.091 17:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.091 17:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.091 17:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.091 17:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.091 17:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.091 { 00:18:14.091 "cntlid": 95, 00:18:14.091 "qid": 0, 00:18:14.091 "state": "enabled", 00:18:14.091 "thread": "nvmf_tgt_poll_group_000", 00:18:14.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.091 "listen_address": { 00:18:14.091 "trtype": "TCP", 00:18:14.091 "adrfam": "IPv4", 00:18:14.091 "traddr": "10.0.0.2", 00:18:14.091 "trsvcid": "4420" 00:18:14.091 }, 00:18:14.091 "peer_address": { 00:18:14.091 "trtype": "TCP", 00:18:14.091 "adrfam": "IPv4", 00:18:14.091 "traddr": "10.0.0.1", 00:18:14.091 "trsvcid": "37242" 00:18:14.091 }, 00:18:14.091 "auth": { 00:18:14.091 "state": "completed", 00:18:14.091 "digest": "sha384", 00:18:14.091 "dhgroup": "ffdhe8192" 00:18:14.091 } 00:18:14.091 } 00:18:14.091 ]' 00:18:14.091 17:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.091 17:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.091 17:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.091 17:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.091 17:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.352 17:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.352 17:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.352 17:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.352 17:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:18:14.352 17:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.292 17:01:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.292 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.552 00:18:15.552 
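The check that follows each attach asserts that the admin qpair actually negotiated what was configured, not merely that the connect returned. In shell terms, this paraphrases the script's own jq filters, assuming $digest and $dhgroup hold the current pass's parameters (sha512/null at this point in the run):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Host side: the attached controller must exist under the expected name.
[[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
# Target side: the qpair must report the negotiated auth parameters.
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]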
17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.552 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.552 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.814 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.814 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.814 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.814 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.814 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.814 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.814 { 00:18:15.814 "cntlid": 97, 00:18:15.814 "qid": 0, 00:18:15.814 "state": "enabled", 00:18:15.814 "thread": "nvmf_tgt_poll_group_000", 00:18:15.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:15.814 "listen_address": { 00:18:15.814 "trtype": "TCP", 00:18:15.814 "adrfam": "IPv4", 00:18:15.814 "traddr": "10.0.0.2", 00:18:15.814 "trsvcid": "4420" 00:18:15.814 }, 00:18:15.814 "peer_address": { 00:18:15.814 "trtype": "TCP", 00:18:15.814 "adrfam": "IPv4", 00:18:15.814 "traddr": "10.0.0.1", 00:18:15.814 "trsvcid": "37266" 00:18:15.814 }, 00:18:15.814 "auth": { 00:18:15.814 "state": "completed", 00:18:15.814 "digest": "sha512", 00:18:15.814 "dhgroup": "null" 00:18:15.814 } 00:18:15.814 } 00:18:15.814 ]' 00:18:15.814 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.814 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.814 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.814 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:15.814 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.814 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.814 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.814 17:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.075 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:18:16.075 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:18:16.645 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.646 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.646 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.646 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.646 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.646 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.646 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:16.646 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:16.906 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:16.906 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.906 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.906 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:16.906 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:16.906 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.906 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.906 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.906 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.906 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.906 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.906 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.906 17:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.167 00:18:17.167 17:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.167 17:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.167 17:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.428 17:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.428 17:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.428 17:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.428 17:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.428 17:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.428 17:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.428 { 00:18:17.428 "cntlid": 99, 00:18:17.428 "qid": 0, 00:18:17.428 "state": "enabled", 00:18:17.428 "thread": "nvmf_tgt_poll_group_000", 00:18:17.428 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:17.428 "listen_address": { 00:18:17.428 "trtype": "TCP", 00:18:17.428 "adrfam": "IPv4", 00:18:17.428 "traddr": "10.0.0.2", 00:18:17.428 "trsvcid": "4420" 00:18:17.428 }, 00:18:17.428 "peer_address": { 00:18:17.428 "trtype": "TCP", 00:18:17.428 "adrfam": "IPv4", 00:18:17.428 "traddr": "10.0.0.1", 00:18:17.428 "trsvcid": "37298" 00:18:17.428 }, 00:18:17.428 "auth": { 00:18:17.428 "state": "completed", 00:18:17.428 "digest": "sha512", 00:18:17.428 "dhgroup": "null" 00:18:17.428 } 00:18:17.428 } 00:18:17.428 ]' 00:18:17.428 17:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.428 17:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.428 17:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.428 17:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:17.428 17:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.428 17:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.428 17:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.428 17:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.690 17:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:18:17.690 17:01:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:18:18.262 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.262 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.263 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.263 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.263 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.263 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.263 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:18.263 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:18.524 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:18.524 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.524 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.524 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:18.524 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:18.524 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.524 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.524 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.524 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.524 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.524 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.524 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
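Each pass also exercises the kernel initiator: the nvme connect invocations above hand the same keys over in-band, in DHHC-1 wire format ("DHHC-1:" + a two-digit transform id + the base64 secret + ":"; in this log 00 marks an untransformed secret, and 01/02/03 appear to correspond to SHA-256/384/512-transformed ones). Generic form, with the secrets elided as placeholders:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
    --dhchap-secret 'DHHC-1:01:<base64 host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<base64 controller secret>:'
# Tear down the kernel-side association before reconfiguring the target:
nvme disconnect -n nqn.2024-03.io.spdk:cnode0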
00:18:18.524 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.786 00:18:18.786 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.786 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.786 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.786 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.786 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.786 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.786 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.046 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.046 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.046 { 00:18:19.046 "cntlid": 101, 00:18:19.046 "qid": 0, 00:18:19.046 "state": "enabled", 00:18:19.046 "thread": "nvmf_tgt_poll_group_000", 00:18:19.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:19.046 "listen_address": { 00:18:19.046 "trtype": "TCP", 00:18:19.046 "adrfam": "IPv4", 00:18:19.046 "traddr": "10.0.0.2", 00:18:19.046 "trsvcid": "4420" 00:18:19.046 }, 00:18:19.046 "peer_address": { 00:18:19.046 "trtype": "TCP", 00:18:19.046 "adrfam": "IPv4", 00:18:19.046 "traddr": "10.0.0.1", 00:18:19.046 "trsvcid": "37308" 00:18:19.046 }, 00:18:19.046 "auth": { 00:18:19.046 "state": "completed", 00:18:19.046 "digest": "sha512", 00:18:19.046 "dhgroup": "null" 00:18:19.046 } 00:18:19.046 } 00:18:19.046 ]' 00:18:19.046 17:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.046 17:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.046 17:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.046 17:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:19.046 17:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.046 17:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.046 17:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.046 17:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.307 17:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:18:19.307 17:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:18:19.877 17:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.877 17:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.877 17:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.877 17:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.877 17:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.877 17:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.877 17:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:19.877 17:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:20.138 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:20.138 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.138 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:20.138 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:20.138 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:20.138 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.138 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:20.138 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.138 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.138 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.138 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:20.138 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.138 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.400 00:18:20.400 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.400 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.400 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.661 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.661 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.661 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.661 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.661 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.661 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.661 { 00:18:20.661 "cntlid": 103, 00:18:20.661 "qid": 0, 00:18:20.661 "state": "enabled", 00:18:20.661 "thread": "nvmf_tgt_poll_group_000", 00:18:20.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:20.661 "listen_address": { 00:18:20.661 "trtype": "TCP", 00:18:20.661 "adrfam": "IPv4", 00:18:20.661 "traddr": "10.0.0.2", 00:18:20.661 "trsvcid": "4420" 00:18:20.661 }, 00:18:20.661 "peer_address": { 00:18:20.661 "trtype": "TCP", 00:18:20.661 "adrfam": "IPv4", 00:18:20.661 "traddr": "10.0.0.1", 00:18:20.661 "trsvcid": "37346" 00:18:20.661 }, 00:18:20.661 "auth": { 00:18:20.661 "state": "completed", 00:18:20.661 "digest": "sha512", 00:18:20.661 "dhgroup": "null" 00:18:20.661 } 00:18:20.661 } 00:18:20.661 ]' 00:18:20.661 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.661 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.661 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.661 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:20.661 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.661 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.661 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.661 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.922 17:01:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:18:20.922 17:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:18:21.494 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.494 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.494 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.494 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.494 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.494 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.494 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.494 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:21.494 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:21.754 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:21.754 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.754 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:21.754 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:21.754 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:21.754 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.755 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.755 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.755 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.755 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.755 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
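After each attach, the script confirms that authentication actually completed with the expected parameters by listing the target's queue pairs and picking the auth object apart with jq, exactly as the qpair dumps in this log show. The checks boil down to the following (a sketch assuming the same JSON shape as the dumps above; $rpc is the path from the sketch earlier and rpc_cmd is the target-side wrapper seen in the trace):

    # The controller must exist on the host side under the expected name.
    [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers \
          | jq -r '.[].name') == nvme0 ]]

    # The target's qpair must report the negotiated digest and dhgroup,
    # and an authentication state of "completed".
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]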
00:18:21.755 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.755 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.015 00:18:22.015 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.015 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.015 17:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.015 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.015 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.015 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.015 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.015 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.015 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.015 { 00:18:22.015 "cntlid": 105, 00:18:22.015 "qid": 0, 00:18:22.015 "state": "enabled", 00:18:22.015 "thread": "nvmf_tgt_poll_group_000", 00:18:22.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:22.015 "listen_address": { 00:18:22.015 "trtype": "TCP", 00:18:22.015 "adrfam": "IPv4", 00:18:22.015 "traddr": "10.0.0.2", 00:18:22.015 "trsvcid": "4420" 00:18:22.015 }, 00:18:22.015 "peer_address": { 00:18:22.015 "trtype": "TCP", 00:18:22.015 "adrfam": "IPv4", 00:18:22.015 "traddr": "10.0.0.1", 00:18:22.015 "trsvcid": "37382" 00:18:22.015 }, 00:18:22.015 "auth": { 00:18:22.015 "state": "completed", 00:18:22.015 "digest": "sha512", 00:18:22.015 "dhgroup": "ffdhe2048" 00:18:22.015 } 00:18:22.015 } 00:18:22.015 ]' 00:18:22.276 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.276 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.276 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.276 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:22.276 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.276 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.276 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.276 17:01:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.536 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:18:22.536 17:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:18:23.108 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.108 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.108 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.108 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.108 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.108 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.108 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:23.108 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:23.367 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:23.367 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.367 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:23.367 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:23.367 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:23.367 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.368 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.368 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.368 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:23.368 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.368 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.368 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.368 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.627 00:18:23.627 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.627 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.627 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.627 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.887 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.887 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.887 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.887 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.887 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.887 { 00:18:23.887 "cntlid": 107, 00:18:23.887 "qid": 0, 00:18:23.887 "state": "enabled", 00:18:23.887 "thread": "nvmf_tgt_poll_group_000", 00:18:23.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:23.887 "listen_address": { 00:18:23.887 "trtype": "TCP", 00:18:23.887 "adrfam": "IPv4", 00:18:23.887 "traddr": "10.0.0.2", 00:18:23.887 "trsvcid": "4420" 00:18:23.887 }, 00:18:23.887 "peer_address": { 00:18:23.887 "trtype": "TCP", 00:18:23.887 "adrfam": "IPv4", 00:18:23.887 "traddr": "10.0.0.1", 00:18:23.887 "trsvcid": "52378" 00:18:23.887 }, 00:18:23.887 "auth": { 00:18:23.887 "state": "completed", 00:18:23.887 "digest": "sha512", 00:18:23.887 "dhgroup": "ffdhe2048" 00:18:23.887 } 00:18:23.887 } 00:18:23.887 ]' 00:18:23.887 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.887 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.887 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.887 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:23.887 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:23.887 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.887 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.887 17:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.148 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:18:24.148 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:18:24.717 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.717 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.717 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.717 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.717 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.717 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.717 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:24.717 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:24.977 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:24.977 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.977 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.977 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:24.977 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:24.977 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.977 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
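Each round is additionally exercised from the kernel initiator via nvme-cli, handing it the same DHHC-1 secrets in their cleartext form and then tearing the session down again. Reconstructed from the nvme_connect traces (flags and NQNs copied from the log; the secrets appear verbatim elsewhere in this log and are abbreviated here):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret 'DHHC-1:01:...' \
        --dhchap-ctrl-secret 'DHHC-1:02:...'

    # A clean teardown proves the session was actually established.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0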
00:18:24.977 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.977 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.977 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.977 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.977 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.977 17:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.237 00:18:25.237 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.237 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.237 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.498 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.498 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.498 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.498 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.498 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.498 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.498 { 00:18:25.498 "cntlid": 109, 00:18:25.498 "qid": 0, 00:18:25.498 "state": "enabled", 00:18:25.498 "thread": "nvmf_tgt_poll_group_000", 00:18:25.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:25.498 "listen_address": { 00:18:25.498 "trtype": "TCP", 00:18:25.498 "adrfam": "IPv4", 00:18:25.498 "traddr": "10.0.0.2", 00:18:25.498 "trsvcid": "4420" 00:18:25.498 }, 00:18:25.498 "peer_address": { 00:18:25.498 "trtype": "TCP", 00:18:25.498 "adrfam": "IPv4", 00:18:25.498 "traddr": "10.0.0.1", 00:18:25.498 "trsvcid": "52390" 00:18:25.498 }, 00:18:25.498 "auth": { 00:18:25.498 "state": "completed", 00:18:25.498 "digest": "sha512", 00:18:25.498 "dhgroup": "ffdhe2048" 00:18:25.498 } 00:18:25.498 } 00:18:25.498 ]' 00:18:25.498 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.498 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.498 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.498 17:01:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:25.498 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.498 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.498 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.498 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.758 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:18:25.758 17:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:18:26.329 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.329 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.329 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.329 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.329 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.329 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.329 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:26.329 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:26.589 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:26.589 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.589 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.589 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:26.589 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:26.589 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.589 17:01:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:26.589 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.589 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.589 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.589 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:26.589 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.589 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.849 00:18:26.850 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.850 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.850 17:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.110 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.110 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.110 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.110 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.110 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.110 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.110 { 00:18:27.110 "cntlid": 111, 00:18:27.110 "qid": 0, 00:18:27.110 "state": "enabled", 00:18:27.110 "thread": "nvmf_tgt_poll_group_000", 00:18:27.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:27.110 "listen_address": { 00:18:27.110 "trtype": "TCP", 00:18:27.110 "adrfam": "IPv4", 00:18:27.110 "traddr": "10.0.0.2", 00:18:27.110 "trsvcid": "4420" 00:18:27.110 }, 00:18:27.110 "peer_address": { 00:18:27.110 "trtype": "TCP", 00:18:27.110 "adrfam": "IPv4", 00:18:27.110 "traddr": "10.0.0.1", 00:18:27.110 "trsvcid": "52414" 00:18:27.110 }, 00:18:27.110 "auth": { 00:18:27.110 "state": "completed", 00:18:27.110 "digest": "sha512", 00:18:27.110 "dhgroup": "ffdhe2048" 00:18:27.110 } 00:18:27.110 } 00:18:27.110 ]' 00:18:27.110 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.110 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.110 
17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.110 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:27.110 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.110 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.110 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.110 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.370 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:18:27.370 17:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:18:27.940 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.940 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.940 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.940 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.940 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.940 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.940 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.940 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:27.940 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:28.200 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:28.200 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.200 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.200 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:28.200 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:28.200 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.200 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.200 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.201 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.201 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.201 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.201 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.201 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.460 00:18:28.460 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.460 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.460 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.720 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.720 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.720 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.720 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.721 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.721 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.721 { 00:18:28.721 "cntlid": 113, 00:18:28.721 "qid": 0, 00:18:28.721 "state": "enabled", 00:18:28.721 "thread": "nvmf_tgt_poll_group_000", 00:18:28.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:28.721 "listen_address": { 00:18:28.721 "trtype": "TCP", 00:18:28.721 "adrfam": "IPv4", 00:18:28.721 "traddr": "10.0.0.2", 00:18:28.721 "trsvcid": "4420" 00:18:28.721 }, 00:18:28.721 "peer_address": { 00:18:28.721 "trtype": "TCP", 00:18:28.721 "adrfam": "IPv4", 00:18:28.721 "traddr": "10.0.0.1", 00:18:28.721 "trsvcid": "52438" 00:18:28.721 }, 00:18:28.721 "auth": { 00:18:28.721 "state": "completed", 00:18:28.721 "digest": "sha512", 00:18:28.721 "dhgroup": "ffdhe3072" 00:18:28.721 } 00:18:28.721 } 00:18:28.721 ]' 00:18:28.721 17:01:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.721 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.721 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.721 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:28.721 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.721 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.721 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.721 17:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.981 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:18:28.981 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:18:29.922 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.922 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.923 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.923 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.923 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.923 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.923 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:29.923 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:29.923 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:29.923 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.923 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:29.923 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:29.923 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:29.923 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.923 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.923 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.923 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.923 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.923 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.923 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.923 17:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.182 00:18:30.182 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.182 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.182 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.443 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.443 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.443 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.443 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.443 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.443 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.443 { 00:18:30.443 "cntlid": 115, 00:18:30.443 "qid": 0, 00:18:30.443 "state": "enabled", 00:18:30.443 "thread": "nvmf_tgt_poll_group_000", 00:18:30.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:30.443 "listen_address": { 00:18:30.443 "trtype": "TCP", 00:18:30.443 "adrfam": "IPv4", 00:18:30.443 "traddr": "10.0.0.2", 00:18:30.443 "trsvcid": "4420" 00:18:30.443 }, 00:18:30.443 "peer_address": { 00:18:30.443 "trtype": "TCP", 00:18:30.443 "adrfam": "IPv4", 
00:18:30.443 "traddr": "10.0.0.1", 00:18:30.443 "trsvcid": "52458" 00:18:30.443 }, 00:18:30.443 "auth": { 00:18:30.443 "state": "completed", 00:18:30.443 "digest": "sha512", 00:18:30.443 "dhgroup": "ffdhe3072" 00:18:30.443 } 00:18:30.443 } 00:18:30.443 ]' 00:18:30.443 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.443 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.443 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.443 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:30.443 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.443 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.443 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.443 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.703 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:18:30.703 17:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:18:31.273 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.273 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.273 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.273 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.273 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.273 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.273 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:31.273 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:31.534 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
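Worth noting on the target/auth.sh@68 trace that recurs above: the controller key is optional per round, and the script handles that with bash's :+ alternate-value expansion rather than an explicit branch. When ckeys[keyid] is unset or empty (as in the key3 rounds in this log, whose add_host calls carry no --dhchap-ctrlr-key), the array expands to nothing; otherwise it yields the flag and its value as two elements. A minimal sketch of the idiom, with the surrounding variable names ($subnqn, $hostnqn) assumed for illustration:

    # keyid arrives as $3 of connect_authenticate, per the trace.
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})

    # "${ckey[@]}" disappears entirely for unidirectional rounds.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$3" "${ckey[@]}"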
00:18:31.534 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.534 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:31.534 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:31.534 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:31.534 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.534 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.534 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.534 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.534 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.534 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.534 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.534 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.795 00:18:31.795 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.795 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.795 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.054 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.054 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.054 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.054 17:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.054 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.054 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.054 { 00:18:32.054 "cntlid": 117, 00:18:32.054 "qid": 0, 00:18:32.054 "state": "enabled", 00:18:32.054 "thread": "nvmf_tgt_poll_group_000", 00:18:32.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:32.054 "listen_address": { 00:18:32.054 "trtype": "TCP", 
00:18:32.054 "adrfam": "IPv4", 00:18:32.054 "traddr": "10.0.0.2", 00:18:32.054 "trsvcid": "4420" 00:18:32.054 }, 00:18:32.054 "peer_address": { 00:18:32.054 "trtype": "TCP", 00:18:32.054 "adrfam": "IPv4", 00:18:32.054 "traddr": "10.0.0.1", 00:18:32.054 "trsvcid": "52480" 00:18:32.054 }, 00:18:32.054 "auth": { 00:18:32.054 "state": "completed", 00:18:32.054 "digest": "sha512", 00:18:32.054 "dhgroup": "ffdhe3072" 00:18:32.054 } 00:18:32.054 } 00:18:32.054 ]' 00:18:32.054 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.054 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.054 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.054 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:32.055 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.055 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.055 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.055 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.314 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:18:32.314 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:18:32.885 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.885 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.885 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.885 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.885 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.885 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.885 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:32.885 17:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:33.144 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:33.144 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.145 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:33.145 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:33.145 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:33.145 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.145 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:33.145 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.145 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.145 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.145 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:33.145 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.145 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.405 00:18:33.405 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.405 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.405 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.665 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.665 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.665 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.665 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.665 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.665 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.665 { 00:18:33.665 "cntlid": 119, 00:18:33.665 "qid": 0, 00:18:33.665 "state": "enabled", 00:18:33.665 "thread": "nvmf_tgt_poll_group_000", 00:18:33.665 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:33.665 "listen_address": { 00:18:33.665 "trtype": "TCP", 00:18:33.665 "adrfam": "IPv4", 00:18:33.665 "traddr": "10.0.0.2", 00:18:33.665 "trsvcid": "4420" 00:18:33.665 }, 00:18:33.665 "peer_address": { 00:18:33.665 "trtype": "TCP", 00:18:33.665 "adrfam": "IPv4", 00:18:33.665 "traddr": "10.0.0.1", 00:18:33.665 "trsvcid": "46956" 00:18:33.665 }, 00:18:33.665 "auth": { 00:18:33.665 "state": "completed", 00:18:33.665 "digest": "sha512", 00:18:33.665 "dhgroup": "ffdhe3072" 00:18:33.665 } 00:18:33.665 } 00:18:33.665 ]' 00:18:33.665 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.665 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.665 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.665 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:33.665 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.665 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.665 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.665 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.925 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:18:33.925 17:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:18:34.495 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.495 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.495 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.495 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.495 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.495 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.495 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.495 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:34.495 17:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:34.755 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:34.755 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.755 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:34.755 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:34.755 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:34.755 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.756 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.756 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.756 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.756 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.756 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.756 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.756 17:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.016 00:18:35.016 17:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.016 17:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.016 17:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.276 17:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.276 17:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.276 17:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.276 17:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.276 17:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.276 17:01:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.276 { 00:18:35.276 "cntlid": 121, 00:18:35.276 "qid": 0, 00:18:35.276 "state": "enabled", 00:18:35.276 "thread": "nvmf_tgt_poll_group_000", 00:18:35.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:35.276 "listen_address": { 00:18:35.276 "trtype": "TCP", 00:18:35.276 "adrfam": "IPv4", 00:18:35.276 "traddr": "10.0.0.2", 00:18:35.276 "trsvcid": "4420" 00:18:35.276 }, 00:18:35.276 "peer_address": { 00:18:35.276 "trtype": "TCP", 00:18:35.276 "adrfam": "IPv4", 00:18:35.276 "traddr": "10.0.0.1", 00:18:35.276 "trsvcid": "46978" 00:18:35.276 }, 00:18:35.276 "auth": { 00:18:35.276 "state": "completed", 00:18:35.276 "digest": "sha512", 00:18:35.276 "dhgroup": "ffdhe4096" 00:18:35.276 } 00:18:35.276 } 00:18:35.276 ]' 00:18:35.276 17:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.276 17:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.276 17:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.276 17:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:35.276 17:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.276 17:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.276 17:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.276 17:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.536 17:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:18:35.536 17:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:18:36.106 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.106 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.106 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.106 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.106 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
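[annotation] At this point one full iteration of the test's (digest, dhgroup, keyid) loop has completed. A sketch condensing the RPC commands recorded in this log into one readable round-trip; $hostnqn stands for the host NQN string shown in the surrounding entries, the RPC names and flags are exactly those logged, and the kernel-initiator leg (nvme connect/disconnect with --dhchap-secret) is omitted for brevity:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # host side: restrict the initiator to the digest/dhgroup pair under test
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  # target side: register the host NQN with the DH-HMAC-CHAP keys under test
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attach a controller, forcing authentication with those keys
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # verify the negotiated parameters on the target's qpair
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth | .state, .digest, .dhgroup'   # completed / sha512 / ffdhe4096
  # tear down before the next (digest, dhgroup, keyid) combination
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"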
00:18:36.106 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.106 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:36.107 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:36.367 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:36.367 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.367 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:36.367 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:36.367 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:36.367 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.367 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.367 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.367 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.367 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.367 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.367 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.367 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.627 00:18:36.627 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.627 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.627 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.887 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.887 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.887 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.887 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.887 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.887 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.887 { 00:18:36.887 "cntlid": 123, 00:18:36.887 "qid": 0, 00:18:36.887 "state": "enabled", 00:18:36.887 "thread": "nvmf_tgt_poll_group_000", 00:18:36.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:36.887 "listen_address": { 00:18:36.887 "trtype": "TCP", 00:18:36.887 "adrfam": "IPv4", 00:18:36.887 "traddr": "10.0.0.2", 00:18:36.887 "trsvcid": "4420" 00:18:36.887 }, 00:18:36.887 "peer_address": { 00:18:36.887 "trtype": "TCP", 00:18:36.887 "adrfam": "IPv4", 00:18:36.887 "traddr": "10.0.0.1", 00:18:36.887 "trsvcid": "47010" 00:18:36.887 }, 00:18:36.887 "auth": { 00:18:36.887 "state": "completed", 00:18:36.887 "digest": "sha512", 00:18:36.887 "dhgroup": "ffdhe4096" 00:18:36.887 } 00:18:36.887 } 00:18:36.887 ]' 00:18:36.887 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.887 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.887 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.887 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:36.887 17:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.887 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.887 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.887 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.147 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:18:37.148 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:18:37.717 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.717 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.717 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.717 17:01:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.978 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.978 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.978 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:37.978 17:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:37.978 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:37.978 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.978 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:37.978 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:37.978 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:37.978 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.978 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.978 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.978 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.978 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.978 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.978 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.978 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.239 00:18:38.239 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.239 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.239 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.500 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.500 17:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.500 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.500 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.500 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.500 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.500 { 00:18:38.500 "cntlid": 125, 00:18:38.500 "qid": 0, 00:18:38.500 "state": "enabled", 00:18:38.500 "thread": "nvmf_tgt_poll_group_000", 00:18:38.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:38.500 "listen_address": { 00:18:38.500 "trtype": "TCP", 00:18:38.500 "adrfam": "IPv4", 00:18:38.500 "traddr": "10.0.0.2", 00:18:38.500 "trsvcid": "4420" 00:18:38.500 }, 00:18:38.500 "peer_address": { 00:18:38.500 "trtype": "TCP", 00:18:38.500 "adrfam": "IPv4", 00:18:38.500 "traddr": "10.0.0.1", 00:18:38.500 "trsvcid": "47036" 00:18:38.500 }, 00:18:38.500 "auth": { 00:18:38.500 "state": "completed", 00:18:38.500 "digest": "sha512", 00:18:38.500 "dhgroup": "ffdhe4096" 00:18:38.500 } 00:18:38.500 } 00:18:38.500 ]' 00:18:38.500 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.500 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.500 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.500 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:38.500 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.760 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.760 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.760 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.760 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:18:38.760 17:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:39.701 17:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:39.962 00:18:39.962 17:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.962 17:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.962 17:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.224 17:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.224 17:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.224 17:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.224 17:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.224 17:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.224 17:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.224 { 00:18:40.224 "cntlid": 127, 00:18:40.224 "qid": 0, 00:18:40.224 "state": "enabled", 00:18:40.224 "thread": "nvmf_tgt_poll_group_000", 00:18:40.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:40.224 "listen_address": { 00:18:40.224 "trtype": "TCP", 00:18:40.224 "adrfam": "IPv4", 00:18:40.224 "traddr": "10.0.0.2", 00:18:40.224 "trsvcid": "4420" 00:18:40.224 }, 00:18:40.224 "peer_address": { 00:18:40.224 "trtype": "TCP", 00:18:40.224 "adrfam": "IPv4", 00:18:40.224 "traddr": "10.0.0.1", 00:18:40.224 "trsvcid": "47064" 00:18:40.224 }, 00:18:40.224 "auth": { 00:18:40.224 "state": "completed", 00:18:40.224 "digest": "sha512", 00:18:40.224 "dhgroup": "ffdhe4096" 00:18:40.224 } 00:18:40.224 } 00:18:40.224 ]' 00:18:40.224 17:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.224 17:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.224 17:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.224 17:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:40.224 17:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.224 17:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.224 17:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.224 17:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.486 17:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:18:40.486 17:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:18:41.056 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.056 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.056 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.056 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.056 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.056 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.056 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.056 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:41.056 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:41.315 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:41.315 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.315 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:41.315 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:41.315 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:41.315 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.315 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.315 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.315 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.315 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.315 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.315 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.315 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.576 00:18:41.576 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.576 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.576 
17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.836 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.836 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.836 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.836 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.836 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.836 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.836 { 00:18:41.836 "cntlid": 129, 00:18:41.836 "qid": 0, 00:18:41.836 "state": "enabled", 00:18:41.836 "thread": "nvmf_tgt_poll_group_000", 00:18:41.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:41.836 "listen_address": { 00:18:41.836 "trtype": "TCP", 00:18:41.836 "adrfam": "IPv4", 00:18:41.836 "traddr": "10.0.0.2", 00:18:41.836 "trsvcid": "4420" 00:18:41.837 }, 00:18:41.837 "peer_address": { 00:18:41.837 "trtype": "TCP", 00:18:41.837 "adrfam": "IPv4", 00:18:41.837 "traddr": "10.0.0.1", 00:18:41.837 "trsvcid": "47092" 00:18:41.837 }, 00:18:41.837 "auth": { 00:18:41.837 "state": "completed", 00:18:41.837 "digest": "sha512", 00:18:41.837 "dhgroup": "ffdhe6144" 00:18:41.837 } 00:18:41.837 } 00:18:41.837 ]' 00:18:41.837 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.837 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.837 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.837 17:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:41.837 17:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.098 17:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.098 17:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.098 17:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.098 17:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:18:42.098 17:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret 
DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:18:43.037 17:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.038 17:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.038 17:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.038 17:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.038 17:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.038 17:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.038 17:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:43.038 17:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:43.038 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:43.038 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.038 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:43.038 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:43.038 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:43.038 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.038 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.038 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.038 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.038 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.038 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.038 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.038 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.297 00:18:43.297 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.297 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.297 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.558 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.558 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.558 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.558 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.558 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.558 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.558 { 00:18:43.558 "cntlid": 131, 00:18:43.558 "qid": 0, 00:18:43.558 "state": "enabled", 00:18:43.558 "thread": "nvmf_tgt_poll_group_000", 00:18:43.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:43.558 "listen_address": { 00:18:43.558 "trtype": "TCP", 00:18:43.558 "adrfam": "IPv4", 00:18:43.558 "traddr": "10.0.0.2", 00:18:43.558 "trsvcid": "4420" 00:18:43.558 }, 00:18:43.558 "peer_address": { 00:18:43.558 "trtype": "TCP", 00:18:43.558 "adrfam": "IPv4", 00:18:43.558 "traddr": "10.0.0.1", 00:18:43.558 "trsvcid": "33836" 00:18:43.558 }, 00:18:43.558 "auth": { 00:18:43.558 "state": "completed", 00:18:43.558 "digest": "sha512", 00:18:43.558 "dhgroup": "ffdhe6144" 00:18:43.558 } 00:18:43.558 } 00:18:43.558 ]' 00:18:43.558 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.558 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.558 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.558 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:43.558 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.558 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.558 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.558 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.819 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:18:43.819 17:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:18:44.390 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.650 17:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.911 00:18:45.172 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.172 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.172 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.172 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.172 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.172 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.172 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.172 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.172 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.172 { 00:18:45.172 "cntlid": 133, 00:18:45.172 "qid": 0, 00:18:45.172 "state": "enabled", 00:18:45.172 "thread": "nvmf_tgt_poll_group_000", 00:18:45.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:45.172 "listen_address": { 00:18:45.172 "trtype": "TCP", 00:18:45.172 "adrfam": "IPv4", 00:18:45.172 "traddr": "10.0.0.2", 00:18:45.172 "trsvcid": "4420" 00:18:45.172 }, 00:18:45.172 "peer_address": { 00:18:45.172 "trtype": "TCP", 00:18:45.172 "adrfam": "IPv4", 00:18:45.172 "traddr": "10.0.0.1", 00:18:45.172 "trsvcid": "33876" 00:18:45.172 }, 00:18:45.172 "auth": { 00:18:45.172 "state": "completed", 00:18:45.172 "digest": "sha512", 00:18:45.172 "dhgroup": "ffdhe6144" 00:18:45.172 } 00:18:45.172 } 00:18:45.172 ]' 00:18:45.172 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.172 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.172 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.432 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:45.432 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.432 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.432 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.432 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.432 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret 
DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:18:45.432 17:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:46.373 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:46.634 00:18:46.895 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.895 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.895 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.895 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.895 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.895 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.895 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.895 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.895 17:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.895 { 00:18:46.895 "cntlid": 135, 00:18:46.895 "qid": 0, 00:18:46.895 "state": "enabled", 00:18:46.895 "thread": "nvmf_tgt_poll_group_000", 00:18:46.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:46.895 "listen_address": { 00:18:46.895 "trtype": "TCP", 00:18:46.895 "adrfam": "IPv4", 00:18:46.895 "traddr": "10.0.0.2", 00:18:46.895 "trsvcid": "4420" 00:18:46.895 }, 00:18:46.895 "peer_address": { 00:18:46.895 "trtype": "TCP", 00:18:46.895 "adrfam": "IPv4", 00:18:46.895 "traddr": "10.0.0.1", 00:18:46.895 "trsvcid": "33898" 00:18:46.895 }, 00:18:46.895 "auth": { 00:18:46.895 "state": "completed", 00:18:46.895 "digest": "sha512", 00:18:46.895 "dhgroup": "ffdhe6144" 00:18:46.895 } 00:18:46.895 } 00:18:46.895 ]' 00:18:46.895 17:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.895 17:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:46.895 17:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.156 17:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:47.156 17:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.156 17:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.156 17:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.156 17:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.156 17:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:18:47.156 17:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:18:48.100 17:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.100 17:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:48.100 17:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.100 17:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.100 17:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.100 17:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.100 17:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.101 17:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:48.101 17:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:48.101 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:48.101 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.101 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:48.101 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:48.101 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:48.101 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.101 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.101 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.101 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.101 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.101 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.101 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.101 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.826 00:18:48.826 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.826 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.826 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.826 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.826 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.826 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.826 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.826 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.826 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.826 { 00:18:48.826 "cntlid": 137, 00:18:48.826 "qid": 0, 00:18:48.826 "state": "enabled", 00:18:48.826 "thread": "nvmf_tgt_poll_group_000", 00:18:48.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:48.826 "listen_address": { 00:18:48.826 "trtype": "TCP", 00:18:48.826 "adrfam": "IPv4", 00:18:48.826 "traddr": "10.0.0.2", 00:18:48.826 "trsvcid": "4420" 00:18:48.826 }, 00:18:48.826 "peer_address": { 00:18:48.826 "trtype": "TCP", 00:18:48.826 "adrfam": "IPv4", 00:18:48.827 "traddr": "10.0.0.1", 00:18:48.827 "trsvcid": "33920" 00:18:48.827 }, 00:18:48.827 "auth": { 00:18:48.827 "state": "completed", 00:18:48.827 "digest": "sha512", 00:18:48.827 "dhgroup": "ffdhe8192" 00:18:48.827 } 00:18:48.827 } 00:18:48.827 ]' 00:18:48.827 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.827 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:48.827 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.827 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:48.827 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.827 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.827 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.827 17:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.110 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:18:49.110 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:18:49.680 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.680 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.680 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.680 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.680 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.680 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.680 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:49.680 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:49.941 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:49.941 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.941 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:49.941 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:49.941 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:49.941 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.941 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.941 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.941 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.941 17:01:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.941 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.941 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.941 17:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.512 00:18:50.512 17:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.512 17:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.512 17:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.512 17:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.513 17:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.513 17:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.513 17:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.513 17:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.513 17:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.513 { 00:18:50.513 "cntlid": 139, 00:18:50.513 "qid": 0, 00:18:50.513 "state": "enabled", 00:18:50.513 "thread": "nvmf_tgt_poll_group_000", 00:18:50.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:50.513 "listen_address": { 00:18:50.513 "trtype": "TCP", 00:18:50.513 "adrfam": "IPv4", 00:18:50.513 "traddr": "10.0.0.2", 00:18:50.513 "trsvcid": "4420" 00:18:50.513 }, 00:18:50.513 "peer_address": { 00:18:50.513 "trtype": "TCP", 00:18:50.513 "adrfam": "IPv4", 00:18:50.513 "traddr": "10.0.0.1", 00:18:50.513 "trsvcid": "33946" 00:18:50.513 }, 00:18:50.513 "auth": { 00:18:50.513 "state": "completed", 00:18:50.513 "digest": "sha512", 00:18:50.513 "dhgroup": "ffdhe8192" 00:18:50.513 } 00:18:50.513 } 00:18:50.513 ]' 00:18:50.513 17:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.774 17:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.774 17:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.774 17:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:50.774 17:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.774 17:01:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.774 17:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.774 17:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.035 17:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:18:51.035 17:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: --dhchap-ctrl-secret DHHC-1:02:ZmE1NjIzZGEwZjJhNWU1OGZjNmMxN2ZlZWU1MzFkMzY3NzUzOTNjNWU0YWFmNzY3kCI90w==: 00:18:51.604 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.605 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.605 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.605 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.605 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.605 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.605 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:51.605 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:51.866 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:51.866 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.866 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:51.866 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:51.866 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:51.866 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.866 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.866 17:01:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.866 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.866 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.866 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.866 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.866 17:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.437 00:18:52.437 17:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.437 17:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.437 17:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.437 17:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.437 17:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.437 17:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.437 17:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.437 17:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.437 17:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.437 { 00:18:52.437 "cntlid": 141, 00:18:52.438 "qid": 0, 00:18:52.438 "state": "enabled", 00:18:52.438 "thread": "nvmf_tgt_poll_group_000", 00:18:52.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:52.438 "listen_address": { 00:18:52.438 "trtype": "TCP", 00:18:52.438 "adrfam": "IPv4", 00:18:52.438 "traddr": "10.0.0.2", 00:18:52.438 "trsvcid": "4420" 00:18:52.438 }, 00:18:52.438 "peer_address": { 00:18:52.438 "trtype": "TCP", 00:18:52.438 "adrfam": "IPv4", 00:18:52.438 "traddr": "10.0.0.1", 00:18:52.438 "trsvcid": "33968" 00:18:52.438 }, 00:18:52.438 "auth": { 00:18:52.438 "state": "completed", 00:18:52.438 "digest": "sha512", 00:18:52.438 "dhgroup": "ffdhe8192" 00:18:52.438 } 00:18:52.438 } 00:18:52.438 ]' 00:18:52.438 17:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.438 17:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.438 17:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.699 17:01:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:52.699 17:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.699 17:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.699 17:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.699 17:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.699 17:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:18:52.699 17:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:01:YzRhOWNhYzkwMGI0Y2M2MzMxNWVhNDg3Nzc4N2IwY2SxLV64: 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.643 17:01:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:53.643 17:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.214 00:18:54.214 17:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.214 17:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.214 17:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.214 17:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.474 17:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.474 17:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.474 17:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.474 17:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.474 17:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.474 { 00:18:54.474 "cntlid": 143, 00:18:54.474 "qid": 0, 00:18:54.474 "state": "enabled", 00:18:54.474 "thread": "nvmf_tgt_poll_group_000", 00:18:54.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:54.474 "listen_address": { 00:18:54.474 "trtype": "TCP", 00:18:54.474 "adrfam": "IPv4", 00:18:54.474 "traddr": "10.0.0.2", 00:18:54.474 "trsvcid": "4420" 00:18:54.474 }, 00:18:54.474 "peer_address": { 00:18:54.474 "trtype": "TCP", 00:18:54.474 "adrfam": "IPv4", 00:18:54.474 "traddr": "10.0.0.1", 00:18:54.474 "trsvcid": "51464" 00:18:54.474 }, 00:18:54.474 "auth": { 00:18:54.474 "state": "completed", 00:18:54.474 "digest": "sha512", 00:18:54.474 "dhgroup": "ffdhe8192" 00:18:54.474 } 00:18:54.474 } 00:18:54.474 ]' 00:18:54.474 17:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.474 17:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.474 
17:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.474 17:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:54.474 17:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.474 17:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.474 17:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.474 17:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.735 17:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:18:54.735 17:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:18:55.306 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.306 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:55.306 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.306 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.306 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.306 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:55.306 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:55.307 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:55.307 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:55.307 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:55.307 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:55.567 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:55.567 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.567 17:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:55.567 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:55.567 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:55.567 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.567 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.567 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.567 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.567 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.567 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.567 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.567 17:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.137 00:18:56.137 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.137 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.137 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.137 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.137 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.137 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.137 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.137 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.137 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.137 { 00:18:56.137 "cntlid": 145, 00:18:56.137 "qid": 0, 00:18:56.137 "state": "enabled", 00:18:56.137 "thread": "nvmf_tgt_poll_group_000", 00:18:56.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:56.137 "listen_address": { 00:18:56.137 "trtype": "TCP", 00:18:56.137 "adrfam": "IPv4", 00:18:56.137 "traddr": "10.0.0.2", 00:18:56.137 "trsvcid": "4420" 00:18:56.137 }, 00:18:56.137 "peer_address": { 00:18:56.137 
"trtype": "TCP", 00:18:56.137 "adrfam": "IPv4", 00:18:56.137 "traddr": "10.0.0.1", 00:18:56.137 "trsvcid": "51494" 00:18:56.137 }, 00:18:56.137 "auth": { 00:18:56.137 "state": "completed", 00:18:56.137 "digest": "sha512", 00:18:56.137 "dhgroup": "ffdhe8192" 00:18:56.137 } 00:18:56.137 } 00:18:56.137 ]' 00:18:56.137 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.137 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.397 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.397 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:56.397 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.397 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.397 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.397 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.657 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:18:56.657 17:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Yjk4MmVkY2M2OGJjMzBhNDBiNzE5MjM2NjAzY2RiMzMzMjYwMzM3YzEzMmZkMDliJXWQ+w==: --dhchap-ctrl-secret DHHC-1:03:YjQ5YTNkNmIwMDE5NDEyMzFkMzFlNWFhYzg0MWU5MWJlZTljM2JjNTg3NWU1OWFjMTU5N2Q1ZWEzN2Y2NmI4NSnbx3U=: 00:18:57.227 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.227 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.227 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.227 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.227 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.227 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:57.227 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.227 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.227 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.227 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:57.227 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:57.227 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:57.227 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:57.227 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:57.227 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:57.227 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:57.227 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:57.227 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:57.227 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:57.796 request: 00:18:57.796 { 00:18:57.796 "name": "nvme0", 00:18:57.796 "trtype": "tcp", 00:18:57.796 "traddr": "10.0.0.2", 00:18:57.796 "adrfam": "ipv4", 00:18:57.796 "trsvcid": "4420", 00:18:57.796 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:57.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:57.796 "prchk_reftag": false, 00:18:57.796 "prchk_guard": false, 00:18:57.796 "hdgst": false, 00:18:57.796 "ddgst": false, 00:18:57.796 "dhchap_key": "key2", 00:18:57.796 "allow_unrecognized_csi": false, 00:18:57.796 "method": "bdev_nvme_attach_controller", 00:18:57.796 "req_id": 1 00:18:57.796 } 00:18:57.796 Got JSON-RPC error response 00:18:57.796 response: 00:18:57.796 { 00:18:57.796 "code": -5, 00:18:57.796 "message": "Input/output error" 00:18:57.796 } 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.796 17:01:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:57.796 17:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:58.057 request: 00:18:58.057 { 00:18:58.057 "name": "nvme0", 00:18:58.057 "trtype": "tcp", 00:18:58.057 "traddr": "10.0.0.2", 00:18:58.057 "adrfam": "ipv4", 00:18:58.057 "trsvcid": "4420", 00:18:58.057 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:58.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:58.057 "prchk_reftag": false, 00:18:58.057 "prchk_guard": false, 00:18:58.057 "hdgst": false, 00:18:58.057 "ddgst": false, 00:18:58.057 "dhchap_key": "key1", 00:18:58.057 "dhchap_ctrlr_key": "ckey2", 00:18:58.057 "allow_unrecognized_csi": false, 00:18:58.057 "method": "bdev_nvme_attach_controller", 00:18:58.057 "req_id": 1 00:18:58.057 } 00:18:58.057 Got JSON-RPC error response 00:18:58.057 response: 00:18:58.057 { 00:18:58.057 "code": -5, 00:18:58.057 "message": "Input/output error" 00:18:58.057 } 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:58.057 17:01:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.057 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.628 request: 00:18:58.628 { 00:18:58.628 "name": "nvme0", 00:18:58.628 "trtype": "tcp", 00:18:58.628 "traddr": "10.0.0.2", 00:18:58.628 "adrfam": "ipv4", 00:18:58.628 "trsvcid": "4420", 00:18:58.628 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:58.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:58.628 "prchk_reftag": false, 00:18:58.628 "prchk_guard": false, 00:18:58.628 "hdgst": false, 00:18:58.628 "ddgst": false, 00:18:58.629 "dhchap_key": "key1", 00:18:58.629 "dhchap_ctrlr_key": "ckey1", 00:18:58.629 "allow_unrecognized_csi": false, 00:18:58.629 "method": "bdev_nvme_attach_controller", 00:18:58.629 "req_id": 1 00:18:58.629 } 00:18:58.629 Got JSON-RPC error response 00:18:58.629 response: 00:18:58.629 { 00:18:58.629 "code": -5, 00:18:58.629 "message": "Input/output error" 00:18:58.629 } 00:18:58.629 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:58.629 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:58.629 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:58.629 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:58.629 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:58.629 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.629 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.629 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.629 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1934693 00:18:58.629 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1934693 ']' 00:18:58.629 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1934693 00:18:58.629 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:58.629 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.629 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1934693 00:18:58.629 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:58.629 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:58.629 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1934693' 00:18:58.629 killing process with pid 1934693 00:18:58.629 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1934693 00:18:58.629 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1934693 00:18:58.889 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:58.890 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:58.890 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:58.890 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:58.890 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1961015 00:18:58.890 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1961015 00:18:58.890 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:58.890 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1961015 ']' 00:18:58.890 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.890 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.890 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.890 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.890 17:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.831 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.831 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:59.831 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:59.831 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:59.831 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.831 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.831 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:59.831 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1961015 00:18:59.831 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1961015 ']' 00:18:59.831 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.831 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.831 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
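The expected-failure step above relies on the NOT wrapper from autotest_common.sh, which inverts an exit status so the check passes only when the target refuses the attach: the subsystem was re-added with --dhchap-key key1 alone, so a host that also presents a controller key (ckey1) is turned away with the -5 Input/output error seen in the JSON-RPC response. A minimal sketch of the pattern, assuming rpc.py is on PATH and $hostnqn is set; this is illustrative rather than the literal helper (which additionally distinguishes es > 128, i.e. death by signal):

  NOT() {
      # Run the wrapped command and succeed only if it failed.
      local es=0
      "$@" || es=$?
      (( es != 0 ))
  }

  # Expected to fail: the target holds only a host key (key1), but the
  # initiator also demands bidirectional auth via a controller key.
  NOT rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1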
00:18:59.831 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.831 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.831 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.831 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:59.831 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:59.831 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.831 17:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.831 null0 00:18:59.831 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.831 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:59.831 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1NU 00:18:59.831 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.831 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.iDG ]] 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iDG 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.jHU 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.OYr ]] 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OYr 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:00.113 17:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.PKp 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.BM9 ]] 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BM9 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Rst 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
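The keyring_file_add_key calls above stage pre-generated DHHC-1 key files with the restarted target before the connect_authenticate rounds; the digit pair after "DHHC-1:" encodes the transformation hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), which is why the sha512 secrets later in this log carry the 03 prefix. As a hedged sketch of where such files could come from (the auth.sh generation helper itself is not shown in this excerpt, and the file name and nvme-cli flags below are illustrative):

  # Mint a SHA-512-transformed DH-CHAP secret for the subsystem NQN;
  # nvme-cli's -m/--hmac picks the hash (3 => SHA-512 => "DHHC-1:03:...").
  key=$(nvme gen-dhchap-key -m 3 -n nqn.2024-03.io.spdk:cnode0)
  keyfile=/tmp/spdk.key-sha512.example   # hypothetical path
  (umask 077; printf '%s\n' "$key" > "$keyfile")

  # Register the file with the target keyring and grant the host access,
  # mirroring the rpc_cmd calls traced above.
  rpc.py keyring_file_add_key key3 "$keyfile"
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key3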
00:19:00.113 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.684 nvme0n1 00:19:00.684 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.684 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.684 17:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.946 17:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.946 17:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.946 17:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.946 17:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.946 17:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.946 17:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.946 { 00:19:00.946 "cntlid": 1, 00:19:00.946 "qid": 0, 00:19:00.946 "state": "enabled", 00:19:00.946 "thread": "nvmf_tgt_poll_group_000", 00:19:00.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:00.946 "listen_address": { 00:19:00.946 "trtype": "TCP", 00:19:00.946 "adrfam": "IPv4", 00:19:00.946 "traddr": "10.0.0.2", 00:19:00.946 "trsvcid": "4420" 00:19:00.946 }, 00:19:00.946 "peer_address": { 00:19:00.946 "trtype": "TCP", 00:19:00.946 "adrfam": "IPv4", 00:19:00.946 "traddr": "10.0.0.1", 00:19:00.946 "trsvcid": "51530" 00:19:00.946 }, 00:19:00.946 "auth": { 00:19:00.946 "state": "completed", 00:19:00.946 "digest": "sha512", 00:19:00.946 "dhgroup": "ffdhe8192" 00:19:00.946 } 00:19:00.946 } 00:19:00.946 ]' 00:19:00.946 17:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.946 17:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.946 17:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.215 17:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:01.215 17:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.215 17:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.215 17:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.215 17:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.215 17:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:19:01.215 17:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:19:02.154 17:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.154 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.414 request: 00:19:02.414 { 00:19:02.414 "name": "nvme0", 00:19:02.414 "trtype": "tcp", 00:19:02.414 "traddr": "10.0.0.2", 00:19:02.414 "adrfam": "ipv4", 00:19:02.414 "trsvcid": "4420", 00:19:02.414 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:02.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:02.414 "prchk_reftag": false, 00:19:02.414 "prchk_guard": false, 00:19:02.414 "hdgst": false, 00:19:02.414 "ddgst": false, 00:19:02.414 "dhchap_key": "key3", 00:19:02.414 "allow_unrecognized_csi": false, 00:19:02.414 "method": "bdev_nvme_attach_controller", 00:19:02.414 "req_id": 1 00:19:02.414 } 00:19:02.414 Got JSON-RPC error response 00:19:02.414 response: 00:19:02.414 { 00:19:02.414 "code": -5, 00:19:02.414 "message": "Input/output error" 00:19:02.414 } 00:19:02.414 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:02.414 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:02.414 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:02.414 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:02.414 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:02.414 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:02.414 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:02.414 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.674 request: 00:19:02.674 { 00:19:02.674 "name": "nvme0", 00:19:02.674 "trtype": "tcp", 00:19:02.674 "traddr": "10.0.0.2", 00:19:02.674 "adrfam": "ipv4", 00:19:02.674 "trsvcid": "4420", 00:19:02.674 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:02.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:02.674 "prchk_reftag": false, 00:19:02.674 "prchk_guard": false, 00:19:02.674 "hdgst": false, 00:19:02.674 "ddgst": false, 00:19:02.674 "dhchap_key": "key3", 00:19:02.674 "allow_unrecognized_csi": false, 00:19:02.674 "method": "bdev_nvme_attach_controller", 00:19:02.674 "req_id": 1 00:19:02.674 } 00:19:02.674 Got JSON-RPC error response 00:19:02.674 response: 00:19:02.674 { 00:19:02.674 "code": -5, 00:19:02.674 "message": "Input/output error" 00:19:02.674 } 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:02.674 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:02.934 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.934 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.934 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.934 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.934 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.934 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.934 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.934 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.934 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:02.934 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:02.934 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:02.935 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:02.935 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.935 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:02.935 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.935 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:02.935 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:02.935 17:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:03.195 request: 00:19:03.195 { 00:19:03.195 "name": "nvme0", 00:19:03.195 "trtype": "tcp", 00:19:03.195 "traddr": "10.0.0.2", 00:19:03.195 "adrfam": "ipv4", 00:19:03.195 "trsvcid": "4420", 00:19:03.195 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:03.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:03.195 "prchk_reftag": false, 00:19:03.195 "prchk_guard": false, 00:19:03.195 "hdgst": false, 00:19:03.195 "ddgst": false, 00:19:03.195 "dhchap_key": "key0", 00:19:03.195 "dhchap_ctrlr_key": "key1", 00:19:03.195 "allow_unrecognized_csi": false, 00:19:03.195 "method": "bdev_nvme_attach_controller", 00:19:03.195 "req_id": 1 00:19:03.195 } 00:19:03.195 Got JSON-RPC error response 00:19:03.195 response: 00:19:03.195 { 00:19:03.195 "code": -5, 00:19:03.195 "message": "Input/output error" 00:19:03.195 } 00:19:03.195 17:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:03.195 17:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:03.195 17:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:03.195 17:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:03.195 17:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:03.195 17:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:03.195 17:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:03.455 nvme0n1 00:19:03.455 17:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:03.455 17:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.455 17:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:03.716 17:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.716 17:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.716 17:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.716 17:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:03.716 17:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.716 17:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.716 17:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.716 17:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:03.716 17:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:03.717 17:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:04.658 nvme0n1 00:19:04.658 17:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:04.658 17:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:04.658 17:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.658 17:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.658 17:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:04.658 17:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.658 17:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.658 17:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.918 17:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:04.918 17:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:04.918 17:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.918 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.918 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:19:04.919 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: --dhchap-ctrl-secret DHHC-1:03:OWY5NWM3MjY4N2MxZThjODk3MjU0ZGQwYzVlNzIwM2I5ZTI2ZjY0ZTAyNTA2YmI4MTcyMTZhYmNhMjQzMzYyMOXl0i0=: 00:19:05.488 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:05.488 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:05.489 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:05.489 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:05.489 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:05.489 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:05.489 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:05.489 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.489 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.748 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:19:05.748 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:05.748 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:05.748 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:05.748 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.749 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:05.749 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.749 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:05.749 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:05.749 17:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:06.318 request: 00:19:06.318 { 00:19:06.318 "name": "nvme0", 00:19:06.318 "trtype": "tcp", 00:19:06.318 "traddr": "10.0.0.2", 00:19:06.318 "adrfam": "ipv4", 00:19:06.318 "trsvcid": "4420", 00:19:06.318 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:06.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:06.319 "prchk_reftag": false, 00:19:06.319 "prchk_guard": false, 00:19:06.319 "hdgst": false, 00:19:06.319 "ddgst": false, 00:19:06.319 "dhchap_key": "key1", 00:19:06.319 "allow_unrecognized_csi": false, 00:19:06.319 "method": "bdev_nvme_attach_controller", 00:19:06.319 "req_id": 1 00:19:06.319 } 00:19:06.319 Got JSON-RPC error response 00:19:06.319 response: 00:19:06.319 { 00:19:06.319 "code": -5, 00:19:06.319 "message": "Input/output error" 00:19:06.319 } 00:19:06.319 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:06.319 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:06.319 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:06.319 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:06.319 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:06.319 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:06.319 17:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:06.889 nvme0n1 00:19:06.889 17:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:06.889 17:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:06.889 17:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.149 17:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.149 17:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.149 17:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.410 17:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.410 17:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.410 17:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.410 17:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.410 17:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:07.410 17:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:07.410 17:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:07.671 nvme0n1 00:19:07.671 17:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:07.671 17:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:07.671 17:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.932 17:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.932 17:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.932 17:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.932 17:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:07.932 17:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.932 17:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.932 17:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.932 17:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: '' 2s 00:19:07.932 17:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:07.932 17:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:07.932 17:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: 00:19:07.932 17:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:07.932 17:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:07.932 17:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:07.932 17:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: ]] 00:19:07.932 17:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZGEwMGE5MWRlMzVlODAxODI3ZTZlOTcwMzZlZjk5Mmb1mo6+: 00:19:07.932 17:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:07.932 17:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:07.932 17:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:10.475 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:10.475 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:10.475 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:10.475 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:10.475 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:10.475 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:10.475 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:10.475 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:10.475 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.475 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.475 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.476 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: 2s 00:19:10.476 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:10.476 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:10.476 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:10.476 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: 00:19:10.476 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:10.476 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:10.476 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:10.476 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: ]] 00:19:10.476 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MTc0MjA2NjA0ODkwNzZlNzJmZGZjODBmMTczMTlhZTcyMjJkNmE3M2U4ZTRjZDU1P3Hvaw==: 00:19:10.476 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:10.476 17:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:12.387 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:12.387 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:12.387 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:12.387 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:12.387 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:12.387 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:12.387 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:12.387 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.387 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:12.387 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.387 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.387 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.387 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:12.387 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:12.387 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:12.956 nvme0n1 00:19:12.956 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:12.956 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.956 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.956 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.956 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:12.956 17:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:13.524 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:13.524 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:13.524 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.524 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.524 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.524 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.524 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.524 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.524 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:13.524 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:13.782 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:13.782 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:13.782 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.782 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.782 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:13.782 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.782 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.782 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.782 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:13.782 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:13.782 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:13.782 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:13.782 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.782 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:13.782 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.783 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:13.783 17:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:14.350 request: 00:19:14.350 { 00:19:14.350 "name": "nvme0", 00:19:14.350 "dhchap_key": "key1", 00:19:14.350 "dhchap_ctrlr_key": "key3", 00:19:14.350 "method": "bdev_nvme_set_keys", 00:19:14.350 "req_id": 1 00:19:14.350 } 00:19:14.350 Got JSON-RPC error response 00:19:14.350 response: 00:19:14.350 { 00:19:14.350 "code": -13, 00:19:14.350 "message": "Permission denied" 00:19:14.350 } 00:19:14.350 17:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:14.350 17:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:14.350 17:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:14.350 17:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:14.350 17:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:14.350 17:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:14.350 17:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.609 17:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:19:14.609 17:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:15.547 17:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:15.547 17:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:15.547 17:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.806 17:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:15.806 17:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:15.806 17:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.806 17:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.806 17:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.806 17:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:15.806 17:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:15.806 17:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:16.374 nvme0n1 00:19:16.374 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:16.375 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.375 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.375 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.375 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:16.375 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:16.375 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:16.375 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
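The rotation rounds above re-key the live connection from both ends: nvmf_subsystem_set_keys swaps what the target will accept, nvme_set_keys pushes the matching DHHC-1 secrets to the kernel initiator, and waitforblk confirms nvme0n1 survives re-authentication. On the host side this amounts to writing the fabrics sysfs attributes of the controller device seen in the trace; a hedged sketch under that assumption (attribute names per the upstream nvme driver, secrets elided, the helper's 2s timeout omitted). The NOT-wrapped bdev_nvme_set_keys continuing below is the negative counterpart: offering key2/key0 after the target just moved to key2/key3 must be rejected, which is the -13 Permission denied response that follows.

  ctl=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
  # Host-to-controller secret; writing it triggers re-authentication
  # on recent kernels.
  printf '%s\n' 'DHHC-1:01:<host-secret>:' > "$ctl/dhchap_secret"
  # Controller-to-host secret, only for bidirectional rounds.
  printf '%s\n' 'DHHC-1:02:<ctrl-secret>:' > "$ctl/dhchap_ctrl_secret"
  # Poll until the namespace is visible again (what waitforblk does).
  until lsblk -l -o NAME | grep -q -w nvme0n1; do sleep 0.2; done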
00:19:16.375 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.375 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:16.375 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.375 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:16.375 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:16.943 request: 00:19:16.943 { 00:19:16.943 "name": "nvme0", 00:19:16.943 "dhchap_key": "key2", 00:19:16.943 "dhchap_ctrlr_key": "key0", 00:19:16.943 "method": "bdev_nvme_set_keys", 00:19:16.944 "req_id": 1 00:19:16.944 } 00:19:16.944 Got JSON-RPC error response 00:19:16.944 response: 00:19:16.944 { 00:19:16.944 "code": -13, 00:19:16.944 "message": "Permission denied" 00:19:16.944 } 00:19:16.944 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:16.944 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:16.944 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:16.944 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:16.944 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:16.944 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:16.944 17:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.209 17:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:17.209 17:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:18.145 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:18.145 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:18.145 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.403 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:18.403 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:18.403 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:18.403 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1935041 00:19:18.403 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1935041 ']' 00:19:18.403 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1935041 00:19:18.403 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:18.403 
17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.403 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1935041 00:19:18.403 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:18.404 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:18.404 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1935041' 00:19:18.404 killing process with pid 1935041 00:19:18.404 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1935041 00:19:18.404 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1935041 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:18.664 rmmod nvme_tcp 00:19:18.664 rmmod nvme_fabrics 00:19:18.664 rmmod nvme_keyring 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1961015 ']' 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1961015 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1961015 ']' 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1961015 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1961015 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1961015' 00:19:18.664 killing process with pid 1961015 00:19:18.664 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1961015 00:19:18.664 17:02:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1961015 00:19:18.924 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:18.924 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:18.924 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:18.924 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:18.924 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:18.924 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:18.924 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:18.924 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:18.924 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:18.924 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.924 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.924 17:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.833 17:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:20.833 17:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.1NU /tmp/spdk.key-sha256.jHU /tmp/spdk.key-sha384.PKp /tmp/spdk.key-sha512.Rst /tmp/spdk.key-sha512.iDG /tmp/spdk.key-sha384.OYr /tmp/spdk.key-sha256.BM9 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:20.833 00:19:20.833 real 2m37.013s 00:19:20.833 user 5m53.087s 00:19:20.833 sys 0m24.821s 00:19:20.833 17:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.833 17:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.833 ************************************ 00:19:20.833 END TEST nvmf_auth_target 00:19:20.833 ************************************ 00:19:20.833 17:02:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:20.833 17:02:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:20.833 17:02:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:20.833 17:02:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.833 17:02:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:21.095 ************************************ 00:19:21.095 START TEST nvmf_bdevio_no_huge 00:19:21.095 ************************************ 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:21.095 * Looking for test storage... 
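[editor's note] The teardown above runs the harness killprocess helper twice (host pid 1935041, then target pid 1961015). A paraphrased sketch of the pattern as reconstructed from the xtrace; the real helper in autotest_common.sh additionally checks the process name and special-cases processes started via sudo:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0    # already gone: nothing to do
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                       # reap and ignore exit status
    }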
00:19:21.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:21.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.095 --rc genhtml_branch_coverage=1 00:19:21.095 --rc genhtml_function_coverage=1 00:19:21.095 --rc genhtml_legend=1 00:19:21.095 --rc geninfo_all_blocks=1 00:19:21.095 --rc geninfo_unexecuted_blocks=1 00:19:21.095 00:19:21.095 ' 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:21.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.095 --rc genhtml_branch_coverage=1 00:19:21.095 --rc genhtml_function_coverage=1 00:19:21.095 --rc genhtml_legend=1 00:19:21.095 --rc geninfo_all_blocks=1 00:19:21.095 --rc geninfo_unexecuted_blocks=1 00:19:21.095 00:19:21.095 ' 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:21.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.095 --rc genhtml_branch_coverage=1 00:19:21.095 --rc genhtml_function_coverage=1 00:19:21.095 --rc genhtml_legend=1 00:19:21.095 --rc geninfo_all_blocks=1 00:19:21.095 --rc geninfo_unexecuted_blocks=1 00:19:21.095 00:19:21.095 ' 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:21.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.095 --rc genhtml_branch_coverage=1 00:19:21.095 --rc genhtml_function_coverage=1 00:19:21.095 --rc genhtml_legend=1 00:19:21.095 --rc geninfo_all_blocks=1 00:19:21.095 --rc geninfo_unexecuted_blocks=1 00:19:21.095 00:19:21.095 ' 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:21.095 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:21.096 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:21.096 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.096 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.096 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:21.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:21.096 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:21.096 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:21.096 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:21.096 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:21.096 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:21.096 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:21.096 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:21.096 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.096 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:21.096 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:21.096 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:21.096 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.096 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.096 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.356 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:21.356 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:21.357 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:21.357 17:02:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:29.490 
17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:29.490 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.490 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:29.491 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:29.491 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:29.491 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:29.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:19:29.491 00:19:29.491 --- 10.0.0.2 ping statistics --- 00:19:29.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.491 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:29.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:29.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:19:29.491 00:19:29.491 --- 10.0.0.1 ping statistics --- 00:19:29.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.491 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1969170 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1969170 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1969170 ']' 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.491 17:02:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:29.491 [2024-11-20 17:02:20.857280] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:19:29.491 [2024-11-20 17:02:20.857353] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:29.491 [2024-11-20 17:02:20.965646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:29.491 [2024-11-20 17:02:21.026376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.491 [2024-11-20 17:02:21.026427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.491 [2024-11-20 17:02:21.026436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.491 [2024-11-20 17:02:21.026447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.491 [2024-11-20 17:02:21.026454] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
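[editor's note] Two notes on the bring-up above. First, the "line 33: [: : integer expression expected" warning earlier comes from common.sh testing an empty variable with '[ ... -eq 1 ]'; the run continues, so it is noise here, not a failure. Second, this suite deliberately starts the target without hugepages; the launch line, exactly as logged:

    # no hugepages, 1024 MB of ordinary memory, core mask 0x78 (cores 3-6),
    # executed inside the target network namespace created earlier
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78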
00:19:29.491 [2024-11-20 17:02:21.028327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:29.491 [2024-11-20 17:02:21.028555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:29.491 [2024-11-20 17:02:21.028706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:29.491 [2024-11-20 17:02:21.028709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:29.753 [2024-11-20 17:02:21.735647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:29.753 Malloc0 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:29.753 [2024-11-20 17:02:21.789582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:29.753 { 00:19:29.753 "params": { 00:19:29.753 "name": "Nvme$subsystem", 00:19:29.753 "trtype": "$TEST_TRANSPORT", 00:19:29.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.753 "adrfam": "ipv4", 00:19:29.753 "trsvcid": "$NVMF_PORT", 00:19:29.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.753 "hdgst": ${hdgst:-false}, 00:19:29.753 "ddgst": ${ddgst:-false} 00:19:29.753 }, 00:19:29.753 "method": "bdev_nvme_attach_controller" 00:19:29.753 } 00:19:29.753 EOF 00:19:29.753 )") 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:29.753 17:02:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:29.753 "params": { 00:19:29.753 "name": "Nvme1", 00:19:29.753 "trtype": "tcp", 00:19:29.753 "traddr": "10.0.0.2", 00:19:29.753 "adrfam": "ipv4", 00:19:29.753 "trsvcid": "4420", 00:19:29.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.753 "hdgst": false, 00:19:29.753 "ddgst": false 00:19:29.753 }, 00:19:29.753 "method": "bdev_nvme_attach_controller" 00:19:29.753 }' 00:19:29.753 [2024-11-20 17:02:21.848463] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
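[editor's note] The subsystem bring-up above is the usual five-RPC sequence; collected in one place with arguments exactly as logged (rpc_cmd in the harness forwards these to the target's RPC socket, equivalent to calling scripts/rpc.py):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MiB / 512 B = 131072 blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420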
00:19:29.753 [2024-11-20 17:02:21.848538] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1969500 ] 00:19:30.013 [2024-11-20 17:02:21.946939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:30.013 [2024-11-20 17:02:22.007461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.013 [2024-11-20 17:02:22.007686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.013 [2024-11-20 17:02:22.007688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.273 I/O targets: 00:19:30.273 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:30.273 00:19:30.273 00:19:30.273 CUnit - A unit testing framework for C - Version 2.1-3 00:19:30.273 http://cunit.sourceforge.net/ 00:19:30.273 00:19:30.273 00:19:30.273 Suite: bdevio tests on: Nvme1n1 00:19:30.273 Test: blockdev write read block ...passed 00:19:30.273 Test: blockdev write zeroes read block ...passed 00:19:30.273 Test: blockdev write zeroes read no split ...passed 00:19:30.273 Test: blockdev write zeroes read split ...passed 00:19:30.534 Test: blockdev write zeroes read split partial ...passed 00:19:30.534 Test: blockdev reset ...[2024-11-20 17:02:22.452751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:30.534 [2024-11-20 17:02:22.452848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8800 (9): Bad file descriptor 00:19:30.534 [2024-11-20 17:02:22.509455] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
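[editor's note] The "Bad file descriptor" flush error in the reset sequence above is expected: the reset test disconnects the qpair first, so the in-flight flush fails before the controller is reconnected and the reset is reported successful. Likewise, the failed completions printed below are part of the pass condition of the "comparev and writev" test, which submits fused COMPARE+WRITE pairs whose compare half is built to miscompare and checks that the fused write half is aborted. Decoding the status pairs (SCT/SC, hex) as they appear:

    #   02/85  ->  Media Errors / Compare Failure
    #   00/09  ->  Generic / Command Aborted due to Failed Fused Command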
00:19:30.534 passed 00:19:30.534 Test: blockdev write read 8 blocks ...passed 00:19:30.534 Test: blockdev write read size > 128k ...passed 00:19:30.534 Test: blockdev write read invalid size ...passed 00:19:30.534 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:30.534 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:30.534 Test: blockdev write read max offset ...passed 00:19:30.534 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:30.796 Test: blockdev writev readv 8 blocks ...passed 00:19:30.796 Test: blockdev writev readv 30 x 1block ...passed 00:19:30.796 Test: blockdev writev readv block ...passed 00:19:30.796 Test: blockdev writev readv size > 128k ...passed 00:19:30.796 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:30.796 Test: blockdev comparev and writev ...[2024-11-20 17:02:22.776919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.796 [2024-11-20 17:02:22.776979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.796 [2024-11-20 17:02:22.776996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.796 [2024-11-20 17:02:22.777005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.796 [2024-11-20 17:02:22.777561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.796 [2024-11-20 17:02:22.777573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:30.796 [2024-11-20 17:02:22.777588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.796 [2024-11-20 17:02:22.777596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:30.796 [2024-11-20 17:02:22.778121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.796 [2024-11-20 17:02:22.778132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:30.796 [2024-11-20 17:02:22.778146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.796 [2024-11-20 17:02:22.778154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:30.796 [2024-11-20 17:02:22.778752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.796 [2024-11-20 17:02:22.778765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:30.796 [2024-11-20 17:02:22.778779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.796 [2024-11-20 17:02:22.778786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:30.796 passed 00:19:30.796 Test: blockdev nvme passthru rw ...passed 00:19:30.796 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:02:22.862823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:30.796 [2024-11-20 17:02:22.862842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:30.796 [2024-11-20 17:02:22.863197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:30.796 [2024-11-20 17:02:22.863208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:30.796 [2024-11-20 17:02:22.863598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:30.796 [2024-11-20 17:02:22.863609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:30.796 [2024-11-20 17:02:22.863998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:30.796 [2024-11-20 17:02:22.864010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:30.796 passed 00:19:30.796 Test: blockdev nvme admin passthru ...passed 00:19:30.796 Test: blockdev copy ...passed 00:19:30.796 00:19:30.796 Run Summary: Type Total Ran Passed Failed Inactive 00:19:30.796 suites 1 1 n/a 0 0 00:19:30.796 tests 23 23 23 0 0 00:19:30.796 asserts 152 152 152 0 n/a 00:19:30.796 00:19:30.796 Elapsed time = 1.240 seconds 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:31.368 rmmod nvme_tcp 00:19:31.368 rmmod nvme_fabrics 00:19:31.368 rmmod nvme_keyring 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1969170 ']' 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1969170 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1969170 ']' 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1969170 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1969170 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1969170' 00:19:31.368 killing process with pid 1969170 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1969170 00:19:31.368 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1969170 00:19:31.628 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:31.628 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:31.628 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:31.628 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:31.628 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:31.628 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:31.628 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:31.628 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:31.628 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:31.628 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.628 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.628 17:02:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.173 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:34.173 00:19:34.173 real 0m12.710s 00:19:34.173 user 0m15.042s 00:19:34.173 sys 0m6.814s 00:19:34.173 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.173 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:34.173 ************************************ 00:19:34.173 END TEST nvmf_bdevio_no_huge 00:19:34.173 ************************************ 00:19:34.173 17:02:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:34.173 17:02:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:34.173 17:02:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.173 17:02:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:34.173 ************************************ 00:19:34.173 START TEST nvmf_tls 00:19:34.173 ************************************ 00:19:34.173 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:34.173 * Looking for test storage... 00:19:34.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:34.173 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:34.173 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:19:34.173 17:02:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:34.173 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:34.173 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.173 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.173 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.173 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:34.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.174 --rc genhtml_branch_coverage=1 00:19:34.174 --rc genhtml_function_coverage=1 00:19:34.174 --rc genhtml_legend=1 00:19:34.174 --rc geninfo_all_blocks=1 00:19:34.174 --rc geninfo_unexecuted_blocks=1 00:19:34.174 00:19:34.174 ' 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:34.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.174 --rc genhtml_branch_coverage=1 00:19:34.174 --rc genhtml_function_coverage=1 00:19:34.174 --rc genhtml_legend=1 00:19:34.174 --rc geninfo_all_blocks=1 00:19:34.174 --rc geninfo_unexecuted_blocks=1 00:19:34.174 00:19:34.174 ' 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:34.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.174 --rc genhtml_branch_coverage=1 00:19:34.174 --rc genhtml_function_coverage=1 00:19:34.174 --rc genhtml_legend=1 00:19:34.174 --rc geninfo_all_blocks=1 00:19:34.174 --rc geninfo_unexecuted_blocks=1 00:19:34.174 00:19:34.174 ' 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:34.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.174 --rc genhtml_branch_coverage=1 00:19:34.174 --rc genhtml_function_coverage=1 00:19:34.174 --rc genhtml_legend=1 00:19:34.174 --rc geninfo_all_blocks=1 00:19:34.174 --rc geninfo_unexecuted_blocks=1 00:19:34.174 00:19:34.174 ' 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
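Note: the cmp_versions walk traced above boils down to a small component-wise compare. The sketch below is a minimal reconstruction from the xtrace lines (the IFS=.-: split and the helper names lt/cmp_versions come from the trace; everything else is an assumption and may differ from scripts/common.sh in edge cases):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:                      # split fields on '.', '-' and ':', as traced
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
    done
    return 1                           # versions equal: neither '<' nor '>'
}

lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # the gate taken in the trace above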
00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:34.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:34.174 17:02:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:42.471 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:42.471 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:42.471 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:42.472 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:42.472 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:42.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:42.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:19:42.472 00:19:42.472 --- 10.0.0.2 ping statistics --- 00:19:42.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.472 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:42.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:42.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:19:42.472 00:19:42.472 --- 10.0.0.1 ping statistics --- 00:19:42.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.472 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1973920 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1973920 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1973920 ']' 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.472 17:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.472 [2024-11-20 17:02:33.645821] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
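For orientation: at this point the harness has moved one E810 port into a private namespace for the target, left the other port in the host namespace for the initiator, and verified both directions with ping. Condensed, the sequence just traced is as follows (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are specific to this run; treat this as a sketch of nvmf_tcp_init, not a verbatim excerpt):

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the host netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator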
00:19:42.472 [2024-11-20 17:02:33.645890] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.472 [2024-11-20 17:02:33.746217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.472 [2024-11-20 17:02:33.797025] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.472 [2024-11-20 17:02:33.797076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.472 [2024-11-20 17:02:33.797086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.472 [2024-11-20 17:02:33.797093] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.472 [2024-11-20 17:02:33.797099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:42.472 [2024-11-20 17:02:33.797888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.472 17:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.472 17:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:42.472 17:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:42.472 17:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:42.472 17:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.472 17:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.472 17:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:42.472 17:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:42.734 true 00:19:42.734 17:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:42.734 17:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:42.734 17:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:42.734 17:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:42.734 17:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:42.995 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:42.995 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:43.255 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:43.255 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:43.255 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:43.255 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:43.255 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:43.517 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:43.517 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:43.517 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:43.517 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:43.777 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:43.777 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:43.777 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:44.037 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:44.037 17:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:44.037 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:44.037 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:44.037 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:44.297 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:44.297 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.yOKupUGIDA 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.lhPVUQi0tq 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.yOKupUGIDA 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.lhPVUQi0tq 00:19:44.557 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:44.817 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:44.817 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.yOKupUGIDA 00:19:44.817 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yOKupUGIDA 00:19:44.817 17:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:45.078 [2024-11-20 17:02:37.144756] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.078 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:45.339 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:45.339 [2024-11-20 17:02:37.481575] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:45.339 [2024-11-20 17:02:37.481780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.339 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:45.600 malloc0 00:19:45.600 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:45.860 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yOKupUGIDA 00:19:45.860 17:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:46.120 17:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.yOKupUGIDA 00:19:56.115 Initializing NVMe Controllers 00:19:56.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:56.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:56.115 Initialization complete. Launching workers. 00:19:56.115 ======================================================== 00:19:56.115 Latency(us) 00:19:56.115 Device Information : IOPS MiB/s Average min max 00:19:56.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18828.88 73.55 3399.21 1122.58 4074.70 00:19:56.115 ======================================================== 00:19:56.115 Total : 18828.88 73.55 3399.21 1122.58 4074.70 00:19:56.115 00:19:56.115 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yOKupUGIDA 00:19:56.115 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:56.115 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:56.115 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:56.115 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yOKupUGIDA 00:19:56.115 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:56.115 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1976906 00:19:56.115 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:56.115 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1976906 /var/tmp/bdevperf.sock 00:19:56.115 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1976906 ']' 00:19:56.115 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:56.115 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.115 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:56.115 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
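The two TLS keys installed above came from format_interchange_psk. Judging from the inline python helper in the trace, the interchange format is <prefix>:<two-hex-digit hash indicator>:<base64 of the raw key with a CRC-32 appended>:. The sketch below reproduces that reading; the LSB-first CRC byte order is an assumption to check against test/nvmf/common.sh (format_key) before reuse:

format_key() { # usage: format_key <prefix> <key> <digest>
  python3 -c '
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed: CRC-32 appended LSB-first
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
' "$1" "$2" "$3"
}

format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
# per the trace, this should print:
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: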
00:19:56.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.115 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:56.115 17:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.376 [2024-11-20 17:02:48.315488] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:19:56.377 [2024-11-20 17:02:48.315548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976906 ] 00:19:56.377 [2024-11-20 17:02:48.401941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.377 [2024-11-20 17:02:48.436970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.947 17:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:56.947 17:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:56.947 17:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yOKupUGIDA 00:19:57.210 17:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:57.472 [2024-11-20 17:02:49.417585] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:57.472 TLSTESTn1 00:19:57.472 17:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:57.472 Running I/O for 10 seconds... 
00:19:59.428 5013.00 IOPS, 19.58 MiB/s [2024-11-20T16:02:52.988Z] 4871.50 IOPS, 19.03 MiB/s [2024-11-20T16:02:53.930Z] 5020.33 IOPS, 19.61 MiB/s [2024-11-20T16:02:54.872Z] 4997.75 IOPS, 19.52 MiB/s [2024-11-20T16:02:55.813Z] 5129.20 IOPS, 20.04 MiB/s [2024-11-20T16:02:56.755Z] 5258.67 IOPS, 20.54 MiB/s [2024-11-20T16:02:57.698Z] 5430.43 IOPS, 21.21 MiB/s [2024-11-20T16:02:58.640Z] 5430.12 IOPS, 21.21 MiB/s [2024-11-20T16:03:00.027Z] 5457.44 IOPS, 21.32 MiB/s [2024-11-20T16:03:00.027Z] 5442.80 IOPS, 21.26 MiB/s 00:20:07.851 Latency(us) 00:20:07.851 [2024-11-20T16:03:00.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.851 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:07.851 Verification LBA range: start 0x0 length 0x2000 00:20:07.851 TLSTESTn1 : 10.01 5448.32 21.28 0.00 0.00 23458.77 4942.51 45219.84 00:20:07.851 [2024-11-20T16:03:00.027Z] =================================================================================================================== 00:20:07.851 [2024-11-20T16:03:00.027Z] Total : 5448.32 21.28 0.00 0.00 23458.77 4942.51 45219.84 00:20:07.851 { 00:20:07.851 "results": [ 00:20:07.851 { 00:20:07.851 "job": "TLSTESTn1", 00:20:07.851 "core_mask": "0x4", 00:20:07.851 "workload": "verify", 00:20:07.851 "status": "finished", 00:20:07.851 "verify_range": { 00:20:07.851 "start": 0, 00:20:07.851 "length": 8192 00:20:07.851 }, 00:20:07.851 "queue_depth": 128, 00:20:07.851 "io_size": 4096, 00:20:07.851 "runtime": 10.012987, 00:20:07.851 "iops": 5448.324261281873, 00:20:07.851 "mibps": 21.282516645632317, 00:20:07.851 "io_failed": 0, 00:20:07.851 "io_timeout": 0, 00:20:07.851 "avg_latency_us": 23458.767209492733, 00:20:07.851 "min_latency_us": 4942.506666666667, 00:20:07.851 "max_latency_us": 45219.84 00:20:07.851 } 00:20:07.851 ], 00:20:07.851 "core_count": 1 00:20:07.851 } 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1976906 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1976906 ']' 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1976906 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1976906 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1976906' 00:20:07.851 killing process with pid 1976906 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1976906 00:20:07.851 Received shutdown signal, test time was about 10.000000 seconds 00:20:07.851 00:20:07.851 Latency(us) 00:20:07.851 [2024-11-20T16:03:00.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.851 [2024-11-20T16:03:00.027Z] 
=================================================================================================================== 00:20:07.851 [2024-11-20T16:03:00.027Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1976906 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lhPVUQi0tq 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lhPVUQi0tq 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lhPVUQi0tq 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lhPVUQi0tq 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1979047 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1979047 /var/tmp/bdevperf.sock 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1979047 ']' 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
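What follows is the deliberate negative case: the suite wraps run_bdevperf in NOT and hands the initiator the second key, /tmp/tmp.lhPVUQi0tq, which the target was never configured with (only the first key was registered for host1), so the attach is required to fail. Condensed, the initiator-side sequence is shown below (rpc.py stands in for the full scripts/rpc.py path used in the trace; socket path, addresses and NQNs are from this run):

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lhPVUQi0tq
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# expected: a JSON-RPC Input/output error, because the TLS handshake cannot
# complete with a PSK the target does not know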
00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.851 17:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.851 [2024-11-20 17:02:59.869727] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:20:07.851 [2024-11-20 17:02:59.869784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979047 ] 00:20:07.851 [2024-11-20 17:02:59.954657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.851 [2024-11-20 17:02:59.983602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.793 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.793 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:08.793 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lhPVUQi0tq 00:20:08.793 17:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:09.054 [2024-11-20 17:03:00.981243] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:09.054 [2024-11-20 17:03:00.987275] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:09.054 [2024-11-20 17:03:00.988380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185fbb0 (107): Transport endpoint is not connected 00:20:09.054 [2024-11-20 17:03:00.989376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185fbb0 (9): Bad file descriptor 00:20:09.054 [2024-11-20 17:03:00.990378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:09.054 [2024-11-20 17:03:00.990385] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:09.054 [2024-11-20 17:03:00.990391] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:09.054 [2024-11-20 17:03:00.990399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:20:09.054 request: 00:20:09.054 { 00:20:09.054 "name": "TLSTEST", 00:20:09.054 "trtype": "tcp", 00:20:09.054 "traddr": "10.0.0.2", 00:20:09.054 "adrfam": "ipv4", 00:20:09.054 "trsvcid": "4420", 00:20:09.054 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.054 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:09.054 "prchk_reftag": false, 00:20:09.054 "prchk_guard": false, 00:20:09.054 "hdgst": false, 00:20:09.054 "ddgst": false, 00:20:09.054 "psk": "key0", 00:20:09.054 "allow_unrecognized_csi": false, 00:20:09.055 "method": "bdev_nvme_attach_controller", 00:20:09.055 "req_id": 1 00:20:09.055 } 00:20:09.055 Got JSON-RPC error response 00:20:09.055 response: 00:20:09.055 { 00:20:09.055 "code": -5, 00:20:09.055 "message": "Input/output error" 00:20:09.055 } 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1979047 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1979047 ']' 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1979047 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1979047 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1979047' 00:20:09.055 killing process with pid 1979047 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1979047 00:20:09.055 Received shutdown signal, test time was about 10.000000 seconds 00:20:09.055 00:20:09.055 Latency(us) 00:20:09.055 [2024-11-20T16:03:01.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.055 [2024-11-20T16:03:01.231Z] =================================================================================================================== 00:20:09.055 [2024-11-20T16:03:01.231Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1979047 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yOKupUGIDA 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.yOKupUGIDA 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yOKupUGIDA 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yOKupUGIDA 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1979341 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1979341 /var/tmp/bdevperf.sock 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1979341 ']' 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.055 17:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.055 [2024-11-20 17:03:01.221996] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:20:09.055 [2024-11-20 17:03:01.222055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979341 ] 00:20:09.316 [2024-11-20 17:03:01.306405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.316 [2024-11-20 17:03:01.335320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.889 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.889 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:09.889 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yOKupUGIDA 00:20:10.149 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:10.410 [2024-11-20 17:03:02.347304] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:10.410 [2024-11-20 17:03:02.351871] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:10.410 [2024-11-20 17:03:02.351891] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:10.410 [2024-11-20 17:03:02.351910] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:10.410 [2024-11-20 17:03:02.352559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e9bb0 (107): Transport endpoint is not connected 00:20:10.410 [2024-11-20 17:03:02.353554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e9bb0 (9): Bad file descriptor 00:20:10.410 [2024-11-20 17:03:02.354556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:10.410 [2024-11-20 17:03:02.354564] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:10.410 [2024-11-20 17:03:02.354570] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:10.410 [2024-11-20 17:03:02.354577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
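[Annotation] The failure mode in this case (tls.sh@150) differs from the previous one: the TCP connection reaches the target, but the PSK identity offered during the TLS handshake is derived from the host and subsystem NQNs, and per the test flow only host1 has a registered key. A small sketch of that lookup, with the identity laid out exactly as tcp.c prints it in the errors above:

    def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
        # Layout as printed by tcp.c above: "NVMe0R01 <hostnqn> <subnqn>".
        return f"NVMe0R01 {hostnqn} {subnqn}"

    # Target side has a PSK for host1/cnode1; the initiator offers host2.
    registered = {tls_psk_identity("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode1")}
    offered = tls_psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1")
    assert offered not in registered   # -> "Could not find PSK for identity: ..."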
00:20:10.410 request: 00:20:10.410 { 00:20:10.410 "name": "TLSTEST", 00:20:10.410 "trtype": "tcp", 00:20:10.410 "traddr": "10.0.0.2", 00:20:10.410 "adrfam": "ipv4", 00:20:10.410 "trsvcid": "4420", 00:20:10.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.410 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:10.410 "prchk_reftag": false, 00:20:10.410 "prchk_guard": false, 00:20:10.410 "hdgst": false, 00:20:10.410 "ddgst": false, 00:20:10.410 "psk": "key0", 00:20:10.410 "allow_unrecognized_csi": false, 00:20:10.410 "method": "bdev_nvme_attach_controller", 00:20:10.410 "req_id": 1 00:20:10.410 } 00:20:10.410 Got JSON-RPC error response 00:20:10.410 response: 00:20:10.410 { 00:20:10.410 "code": -5, 00:20:10.410 "message": "Input/output error" 00:20:10.410 } 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1979341 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1979341 ']' 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1979341 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1979341 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1979341' 00:20:10.410 killing process with pid 1979341 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1979341 00:20:10.410 Received shutdown signal, test time was about 10.000000 seconds 00:20:10.410 00:20:10.410 Latency(us) 00:20:10.410 [2024-11-20T16:03:02.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.410 [2024-11-20T16:03:02.586Z] =================================================================================================================== 00:20:10.410 [2024-11-20T16:03:02.586Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1979341 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yOKupUGIDA 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.yOKupUGIDA 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yOKupUGIDA 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yOKupUGIDA 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1979725 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1979725 /var/tmp/bdevperf.sock 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1979725 ']' 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:10.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.410 17:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.671 [2024-11-20 17:03:02.583954] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:20:10.671 [2024-11-20 17:03:02.584010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979725 ] 00:20:10.671 [2024-11-20 17:03:02.665394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.671 [2024-11-20 17:03:02.693656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:11.243 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.243 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:11.243 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yOKupUGIDA 00:20:11.503 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:11.765 [2024-11-20 17:03:03.713374] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:11.765 [2024-11-20 17:03:03.723841] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:11.765 [2024-11-20 17:03:03.723859] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:11.765 [2024-11-20 17:03:03.723878] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:11.765 [2024-11-20 17:03:03.724514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121dbb0 (107): Transport endpoint is not connected 00:20:11.765 [2024-11-20 17:03:03.725510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121dbb0 (9): Bad file descriptor 00:20:11.765 [2024-11-20 17:03:03.726512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:11.765 [2024-11-20 17:03:03.726519] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:11.765 [2024-11-20 17:03:03.726525] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:11.765 [2024-11-20 17:03:03.726533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
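[Annotation] All three attach attempts so far are wrapped in the harness's NOT helper (autotest_common.sh@652-679 in the traces above): the wrapped command is expected to fail, its exit status is captured in es, and the helper returns success only when es is non-zero. The same inversion in a few lines of Python, as a sketch of the semantics rather than the shell helper itself:

    import subprocess

    def NOT(*argv: str) -> bool:
        """Return True iff the wrapped command exits non-zero (expected failure)."""
        es = subprocess.run(argv).returncode   # 'es' mirrors the variable in the traces
        return es != 0

    assert NOT("false") and not NOT("true")    # a failure passes, a success fails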
00:20:11.765 request: 00:20:11.765 { 00:20:11.765 "name": "TLSTEST", 00:20:11.765 "trtype": "tcp", 00:20:11.765 "traddr": "10.0.0.2", 00:20:11.765 "adrfam": "ipv4", 00:20:11.765 "trsvcid": "4420", 00:20:11.765 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:11.765 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.765 "prchk_reftag": false, 00:20:11.765 "prchk_guard": false, 00:20:11.765 "hdgst": false, 00:20:11.765 "ddgst": false, 00:20:11.765 "psk": "key0", 00:20:11.765 "allow_unrecognized_csi": false, 00:20:11.765 "method": "bdev_nvme_attach_controller", 00:20:11.765 "req_id": 1 00:20:11.765 } 00:20:11.765 Got JSON-RPC error response 00:20:11.765 response: 00:20:11.765 { 00:20:11.765 "code": -5, 00:20:11.765 "message": "Input/output error" 00:20:11.765 } 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1979725 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1979725 ']' 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1979725 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1979725 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1979725' 00:20:11.765 killing process with pid 1979725 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1979725 00:20:11.765 Received shutdown signal, test time was about 10.000000 seconds 00:20:11.765 00:20:11.765 Latency(us) 00:20:11.765 [2024-11-20T16:03:03.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.765 [2024-11-20T16:03:03.941Z] =================================================================================================================== 00:20:11.765 [2024-11-20T16:03:03.941Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1979725 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:11.765 
17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:11.765 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:11.766 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:11.766 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:11.766 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1980068 00:20:11.766 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:11.766 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1980068 /var/tmp/bdevperf.sock 00:20:11.766 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:11.766 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1980068 ']' 00:20:11.766 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.766 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.766 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:11.766 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.766 17:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.027 [2024-11-20 17:03:03.972302] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:20:12.027 [2024-11-20 17:03:03.972356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1980068 ] 00:20:12.027 [2024-11-20 17:03:04.054284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.027 [2024-11-20 17:03:04.082317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.968 17:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.968 17:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:12.968 17:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:12.968 [2024-11-20 17:03:04.925380] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:12.969 [2024-11-20 17:03:04.925403] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:12.969 request: 00:20:12.969 { 00:20:12.969 "name": "key0", 00:20:12.969 "path": "", 00:20:12.969 "method": "keyring_file_add_key", 00:20:12.969 "req_id": 1 00:20:12.969 } 00:20:12.969 Got JSON-RPC error response 00:20:12.969 response: 00:20:12.969 { 00:20:12.969 "code": -1, 00:20:12.969 "message": "Operation not permitted" 00:20:12.969 } 00:20:12.969 17:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:12.969 [2024-11-20 17:03:05.093891] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:12.969 [2024-11-20 17:03:05.093917] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:12.969 request: 00:20:12.969 { 00:20:12.969 "name": "TLSTEST", 00:20:12.969 "trtype": "tcp", 00:20:12.969 "traddr": "10.0.0.2", 00:20:12.969 "adrfam": "ipv4", 00:20:12.969 "trsvcid": "4420", 00:20:12.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:12.969 "prchk_reftag": false, 00:20:12.969 "prchk_guard": false, 00:20:12.969 "hdgst": false, 00:20:12.969 "ddgst": false, 00:20:12.969 "psk": "key0", 00:20:12.969 "allow_unrecognized_csi": false, 00:20:12.969 "method": "bdev_nvme_attach_controller", 00:20:12.969 "req_id": 1 00:20:12.969 } 00:20:12.969 Got JSON-RPC error response 00:20:12.969 response: 00:20:12.969 { 00:20:12.969 "code": -126, 00:20:12.969 "message": "Required key not available" 00:20:12.969 } 00:20:12.969 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1980068 00:20:12.969 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1980068 ']' 00:20:12.969 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1980068 00:20:12.969 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:12.969 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.969 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1980068 00:20:13.229 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:13.229 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:13.229 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1980068' 00:20:13.229 killing process with pid 1980068 00:20:13.229 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1980068 00:20:13.229 Received shutdown signal, test time was about 10.000000 seconds 00:20:13.229 00:20:13.229 Latency(us) 00:20:13.229 [2024-11-20T16:03:05.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.229 [2024-11-20T16:03:05.405Z] =================================================================================================================== 00:20:13.229 [2024-11-20T16:03:05.405Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:13.229 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1980068 00:20:13.229 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:13.229 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:13.229 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:13.229 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:13.229 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:13.229 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1973920 00:20:13.229 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1973920 ']' 00:20:13.229 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1973920 00:20:13.229 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:13.229 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.229 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1973920 00:20:13.229 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:13.230 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:13.230 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1973920' 00:20:13.230 killing process with pid 1973920 00:20:13.230 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1973920 00:20:13.230 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1973920 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:13.491 17:03:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.VjbBbOvcST 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.VjbBbOvcST 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1980419 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1980419 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1980419 ']' 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.491 17:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.491 [2024-11-20 17:03:05.580024] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:20:13.491 [2024-11-20 17:03:05.580088] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.751 [2024-11-20 17:03:05.669157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.751 [2024-11-20 17:03:05.699025] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.751 [2024-11-20 17:03:05.699053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:13.751 [2024-11-20 17:03:05.699058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.751 [2024-11-20 17:03:05.699063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.751 [2024-11-20 17:03:05.699067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:13.751 [2024-11-20 17:03:05.699518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.322 17:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.322 17:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:14.322 17:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:14.322 17:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:14.322 17:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.322 17:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.322 17:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.VjbBbOvcST 00:20:14.322 17:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VjbBbOvcST 00:20:14.322 17:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:14.582 [2024-11-20 17:03:06.560371] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.582 17:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:14.582 17:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:14.842 [2024-11-20 17:03:06.885175] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:14.842 [2024-11-20 17:03:06.885382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.842 17:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:15.103 malloc0 00:20:15.103 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:15.103 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VjbBbOvcST 00:20:15.362 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:15.624 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VjbBbOvcST 00:20:15.624 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:20:15.624 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:15.624 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:15.624 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VjbBbOvcST 00:20:15.624 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:15.624 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1980784 00:20:15.624 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:15.624 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1980784 /var/tmp/bdevperf.sock 00:20:15.624 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:15.624 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1980784 ']' 00:20:15.624 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.624 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.624 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.624 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.624 17:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.624 [2024-11-20 17:03:07.606779] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:20:15.624 [2024-11-20 17:03:07.606834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1980784 ] 00:20:15.624 [2024-11-20 17:03:07.689171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.624 [2024-11-20 17:03:07.718169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.565 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.565 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:16.565 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VjbBbOvcST 00:20:16.565 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:16.565 [2024-11-20 17:03:08.725864] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:16.826 TLSTESTn1 00:20:16.826 17:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:16.826 Running I/O for 10 seconds... 00:20:19.153 5721.00 IOPS, 22.35 MiB/s [2024-11-20T16:03:12.271Z] 6033.00 IOPS, 23.57 MiB/s [2024-11-20T16:03:13.211Z] 6212.00 IOPS, 24.27 MiB/s [2024-11-20T16:03:14.152Z] 6189.75 IOPS, 24.18 MiB/s [2024-11-20T16:03:15.094Z] 6181.40 IOPS, 24.15 MiB/s [2024-11-20T16:03:16.037Z] 6092.17 IOPS, 23.80 MiB/s [2024-11-20T16:03:16.979Z] 6149.86 IOPS, 24.02 MiB/s [2024-11-20T16:03:18.363Z] 6160.62 IOPS, 24.06 MiB/s [2024-11-20T16:03:19.302Z] 6092.00 IOPS, 23.80 MiB/s [2024-11-20T16:03:19.302Z] 6107.80 IOPS, 23.86 MiB/s 00:20:27.126 Latency(us) 00:20:27.126 [2024-11-20T16:03:19.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.126 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:27.126 Verification LBA range: start 0x0 length 0x2000 00:20:27.127 TLSTESTn1 : 10.01 6113.42 23.88 0.00 0.00 20908.37 4587.52 23920.64 00:20:27.127 [2024-11-20T16:03:19.303Z] =================================================================================================================== 00:20:27.127 [2024-11-20T16:03:19.303Z] Total : 6113.42 23.88 0.00 0.00 20908.37 4587.52 23920.64 00:20:27.127 { 00:20:27.127 "results": [ 00:20:27.127 { 00:20:27.127 "job": "TLSTESTn1", 00:20:27.127 "core_mask": "0x4", 00:20:27.127 "workload": "verify", 00:20:27.127 "status": "finished", 00:20:27.127 "verify_range": { 00:20:27.127 "start": 0, 00:20:27.127 "length": 8192 00:20:27.127 }, 00:20:27.127 "queue_depth": 128, 00:20:27.127 "io_size": 4096, 00:20:27.127 "runtime": 10.011254, 00:20:27.127 "iops": 6113.419957180189, 00:20:27.127 "mibps": 23.880546707735114, 00:20:27.127 "io_failed": 0, 00:20:27.127 "io_timeout": 0, 00:20:27.127 "avg_latency_us": 20908.37397469623, 00:20:27.127 "min_latency_us": 4587.52, 00:20:27.127 "max_latency_us": 23920.64 00:20:27.127 } 00:20:27.127 ], 00:20:27.127 "core_count": 1 
00:20:27.127 } 00:20:27.127 17:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:27.127 17:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1980784 00:20:27.127 17:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1980784 ']' 00:20:27.127 17:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1980784 00:20:27.127 17:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:27.127 17:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.127 17:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1980784 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1980784' 00:20:27.127 killing process with pid 1980784 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1980784 00:20:27.127 Received shutdown signal, test time was about 10.000000 seconds 00:20:27.127 00:20:27.127 Latency(us) 00:20:27.127 [2024-11-20T16:03:19.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.127 [2024-11-20T16:03:19.303Z] =================================================================================================================== 00:20:27.127 [2024-11-20T16:03:19.303Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1980784 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.VjbBbOvcST 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VjbBbOvcST 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VjbBbOvcST 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VjbBbOvcST 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:27.127 17:03:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VjbBbOvcST 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1983538 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1983538 /var/tmp/bdevperf.sock 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1983538 ']' 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.127 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.127 [2024-11-20 17:03:19.202847] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:20:27.127 [2024-11-20 17:03:19.202905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1983538 ] 00:20:27.127 [2024-11-20 17:03:19.284987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.386 [2024-11-20 17:03:19.313813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.954 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.954 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:27.954 17:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VjbBbOvcST 00:20:28.215 [2024-11-20 17:03:20.128935] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VjbBbOvcST': 0100666 00:20:28.215 [2024-11-20 17:03:20.128958] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:28.215 request: 00:20:28.215 { 00:20:28.215 "name": "key0", 00:20:28.215 "path": "/tmp/tmp.VjbBbOvcST", 00:20:28.215 "method": "keyring_file_add_key", 00:20:28.215 "req_id": 1 00:20:28.215 } 00:20:28.215 Got JSON-RPC error response 00:20:28.215 response: 00:20:28.215 { 00:20:28.215 "code": -1, 00:20:28.215 "message": "Operation not permitted" 00:20:28.215 } 00:20:28.215 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:28.215 [2024-11-20 17:03:20.297436] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:28.215 [2024-11-20 17:03:20.297464] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:28.215 request: 00:20:28.215 { 00:20:28.215 "name": "TLSTEST", 00:20:28.215 "trtype": "tcp", 00:20:28.215 "traddr": "10.0.0.2", 00:20:28.215 "adrfam": "ipv4", 00:20:28.215 "trsvcid": "4420", 00:20:28.215 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.215 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:28.215 "prchk_reftag": false, 00:20:28.215 "prchk_guard": false, 00:20:28.215 "hdgst": false, 00:20:28.215 "ddgst": false, 00:20:28.215 "psk": "key0", 00:20:28.215 "allow_unrecognized_csi": false, 00:20:28.215 "method": "bdev_nvme_attach_controller", 00:20:28.215 "req_id": 1 00:20:28.215 } 00:20:28.215 Got JSON-RPC error response 00:20:28.215 response: 00:20:28.215 { 00:20:28.215 "code": -126, 00:20:28.215 "message": "Required key not available" 00:20:28.215 } 00:20:28.215 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1983538 00:20:28.215 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1983538 ']' 00:20:28.215 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1983538 00:20:28.215 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:28.215 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:28.215 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1983538 00:20:28.215 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:28.215 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:28.215 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1983538' 00:20:28.215 killing process with pid 1983538 00:20:28.215 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1983538 00:20:28.215 Received shutdown signal, test time was about 10.000000 seconds 00:20:28.215 00:20:28.215 Latency(us) 00:20:28.215 [2024-11-20T16:03:20.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.215 [2024-11-20T16:03:20.391Z] =================================================================================================================== 00:20:28.215 [2024-11-20T16:03:20.391Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:28.215 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1983538 00:20:28.474 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:28.474 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:28.474 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:28.474 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:28.474 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:28.474 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1980419 00:20:28.474 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1980419 ']' 00:20:28.474 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1980419 00:20:28.474 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:28.474 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:28.474 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1980419 00:20:28.474 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:28.474 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:28.474 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1980419' 00:20:28.474 killing process with pid 1980419 00:20:28.474 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1980419 00:20:28.474 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1980419 00:20:28.736 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:28.736 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:28.736 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:28.736 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.736 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1983789 00:20:28.736 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1983789 00:20:28.736 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:28.736 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1983789 ']' 00:20:28.736 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.736 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.736 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.736 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.736 17:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.736 [2024-11-20 17:03:20.712427] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:20:28.736 [2024-11-20 17:03:20.712485] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.736 [2024-11-20 17:03:20.803509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.736 [2024-11-20 17:03:20.833226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.736 [2024-11-20 17:03:20.833265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.736 [2024-11-20 17:03:20.833271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.736 [2024-11-20 17:03:20.833277] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.736 [2024-11-20 17:03:20.833281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
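Note: the keyring_file_add_key failure above (and the retry that follows below) comes from the PSK interchange file still carrying mode 0666 — keyring_file_check_path rejects key files whose mode grants group/other access, and the script exercises this negative path on purpose before tightening the mode at target/tls.sh@182. A minimal sketch of the fix-up the script performs later in this log (paths as they appear in the capture; the PSK payload itself never appears in the log and is left out here):

    # make the key file owner-only so keyring_file_check_path accepts it
    chmod 0600 /tmp/tmp.VjbBbOvcST
    # register the PSK with the running target over its RPC socket
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        keyring_file_add_key key0 /tmp/tmp.VjbBbOvcST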
00:20:28.736 [2024-11-20 17:03:20.833753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.716 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.716 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:29.716 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:29.716 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:29.716 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.716 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.716 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.VjbBbOvcST 00:20:29.716 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:29.716 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.VjbBbOvcST 00:20:29.716 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:29.716 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:29.716 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:29.716 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:29.716 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.VjbBbOvcST 00:20:29.716 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VjbBbOvcST 00:20:29.716 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:29.716 [2024-11-20 17:03:21.694080] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.716 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:29.716 17:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:30.008 [2024-11-20 17:03:22.010856] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:30.008 [2024-11-20 17:03:22.011059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.008 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:30.008 malloc0 00:20:30.008 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:30.268 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VjbBbOvcST 00:20:30.527 [2024-11-20 
17:03:22.501829] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VjbBbOvcST': 0100666 00:20:30.527 [2024-11-20 17:03:22.501852] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:30.527 request: 00:20:30.527 { 00:20:30.527 "name": "key0", 00:20:30.527 "path": "/tmp/tmp.VjbBbOvcST", 00:20:30.527 "method": "keyring_file_add_key", 00:20:30.527 "req_id": 1 00:20:30.527 } 00:20:30.527 Got JSON-RPC error response 00:20:30.527 response: 00:20:30.527 { 00:20:30.527 "code": -1, 00:20:30.527 "message": "Operation not permitted" 00:20:30.527 } 00:20:30.527 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:30.527 [2024-11-20 17:03:22.670273] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:30.527 [2024-11-20 17:03:22.670301] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:30.527 request: 00:20:30.527 { 00:20:30.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.527 "host": "nqn.2016-06.io.spdk:host1", 00:20:30.527 "psk": "key0", 00:20:30.527 "method": "nvmf_subsystem_add_host", 00:20:30.527 "req_id": 1 00:20:30.527 } 00:20:30.527 Got JSON-RPC error response 00:20:30.527 response: 00:20:30.527 { 00:20:30.527 "code": -32603, 00:20:30.527 "message": "Internal error" 00:20:30.527 } 00:20:30.527 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:30.527 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:30.527 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:30.527 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:30.528 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1983789 00:20:30.528 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1983789 ']' 00:20:30.528 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1983789 00:20:30.528 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:30.528 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.528 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1983789 00:20:30.787 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:30.787 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:30.787 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1983789' 00:20:30.787 killing process with pid 1983789 00:20:30.787 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1983789 00:20:30.787 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1983789 00:20:30.787 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.VjbBbOvcST 00:20:30.787 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:30.788 17:03:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:30.788 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:30.788 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.788 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1984315 00:20:30.788 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1984315 00:20:30.788 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:30.788 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1984315 ']' 00:20:30.788 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.788 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.788 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.788 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.788 17:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.788 [2024-11-20 17:03:22.930450] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:20:30.788 [2024-11-20 17:03:22.930520] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.047 [2024-11-20 17:03:23.019252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.047 [2024-11-20 17:03:23.048058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.047 [2024-11-20 17:03:23.048083] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.047 [2024-11-20 17:03:23.048089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.047 [2024-11-20 17:03:23.048093] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.047 [2024-11-20 17:03:23.048097] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
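Note: with the key file now at 0600, setup_nvmf_tgt (target/tls.sh@50-59) is re-run against the fresh target below and succeeds. Condensed from the RPC calls traced in this log, the target-side TLS bring-up is the following sequence (rpc.py path shortened for readability — the log uses the full /var/jenkins/.../spdk/scripts/rpc.py path):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS ("secure_channel": true in the saved config)
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.VjbBbOvcST
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0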
00:20:31.047 [2024-11-20 17:03:23.048557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.616 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:31.616 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:31.616 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:31.616 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:31.616 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.616 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.616 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.VjbBbOvcST 00:20:31.616 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VjbBbOvcST 00:20:31.617 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:31.877 [2024-11-20 17:03:23.904573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.878 17:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:32.139 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:32.139 [2024-11-20 17:03:24.225357] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:32.139 [2024-11-20 17:03:24.225562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.139 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:32.401 malloc0 00:20:32.401 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:32.662 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VjbBbOvcST 00:20:32.662 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:32.924 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1984681 00:20:32.924 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:32.924 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:32.924 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1984681 /var/tmp/bdevperf.sock 00:20:32.924 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1984681 ']' 00:20:32.924 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:32.924 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:32.924 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:32.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:32.924 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:32.924 17:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.924 [2024-11-20 17:03:24.959611] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:20:32.924 [2024-11-20 17:03:24.959664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1984681 ] 00:20:32.924 [2024-11-20 17:03:25.041079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.924 [2024-11-20 17:03:25.070095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.866 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:33.866 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:33.866 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VjbBbOvcST 00:20:33.866 17:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:34.127 [2024-11-20 17:03:26.053574] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:34.127 TLSTESTn1 00:20:34.127 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:34.387 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:34.387 "subsystems": [ 00:20:34.387 { 00:20:34.387 "subsystem": "keyring", 00:20:34.387 "config": [ 00:20:34.387 { 00:20:34.387 "method": "keyring_file_add_key", 00:20:34.387 "params": { 00:20:34.387 "name": "key0", 00:20:34.387 "path": "/tmp/tmp.VjbBbOvcST" 00:20:34.387 } 00:20:34.387 } 00:20:34.388 ] 00:20:34.388 }, 00:20:34.388 { 00:20:34.388 "subsystem": "iobuf", 00:20:34.388 "config": [ 00:20:34.388 { 00:20:34.388 "method": "iobuf_set_options", 00:20:34.388 "params": { 00:20:34.388 "small_pool_count": 8192, 00:20:34.388 "large_pool_count": 1024, 00:20:34.388 "small_bufsize": 8192, 00:20:34.388 "large_bufsize": 135168, 00:20:34.388 "enable_numa": false 00:20:34.388 } 00:20:34.388 } 00:20:34.388 ] 00:20:34.388 }, 00:20:34.388 { 00:20:34.388 "subsystem": "sock", 00:20:34.388 "config": [ 00:20:34.388 { 00:20:34.388 "method": "sock_set_default_impl", 00:20:34.388 "params": { 00:20:34.388 "impl_name": "posix" 
00:20:34.388 } 00:20:34.388 }, 00:20:34.388 { 00:20:34.388 "method": "sock_impl_set_options", 00:20:34.388 "params": { 00:20:34.388 "impl_name": "ssl", 00:20:34.388 "recv_buf_size": 4096, 00:20:34.388 "send_buf_size": 4096, 00:20:34.388 "enable_recv_pipe": true, 00:20:34.388 "enable_quickack": false, 00:20:34.388 "enable_placement_id": 0, 00:20:34.388 "enable_zerocopy_send_server": true, 00:20:34.388 "enable_zerocopy_send_client": false, 00:20:34.388 "zerocopy_threshold": 0, 00:20:34.388 "tls_version": 0, 00:20:34.388 "enable_ktls": false 00:20:34.388 } 00:20:34.388 }, 00:20:34.388 { 00:20:34.388 "method": "sock_impl_set_options", 00:20:34.388 "params": { 00:20:34.388 "impl_name": "posix", 00:20:34.388 "recv_buf_size": 2097152, 00:20:34.388 "send_buf_size": 2097152, 00:20:34.388 "enable_recv_pipe": true, 00:20:34.388 "enable_quickack": false, 00:20:34.388 "enable_placement_id": 0, 00:20:34.388 "enable_zerocopy_send_server": true, 00:20:34.388 "enable_zerocopy_send_client": false, 00:20:34.388 "zerocopy_threshold": 0, 00:20:34.388 "tls_version": 0, 00:20:34.388 "enable_ktls": false 00:20:34.388 } 00:20:34.388 } 00:20:34.388 ] 00:20:34.388 }, 00:20:34.388 { 00:20:34.388 "subsystem": "vmd", 00:20:34.388 "config": [] 00:20:34.388 }, 00:20:34.388 { 00:20:34.388 "subsystem": "accel", 00:20:34.388 "config": [ 00:20:34.388 { 00:20:34.388 "method": "accel_set_options", 00:20:34.388 "params": { 00:20:34.388 "small_cache_size": 128, 00:20:34.388 "large_cache_size": 16, 00:20:34.388 "task_count": 2048, 00:20:34.388 "sequence_count": 2048, 00:20:34.388 "buf_count": 2048 00:20:34.388 } 00:20:34.388 } 00:20:34.388 ] 00:20:34.388 }, 00:20:34.388 { 00:20:34.388 "subsystem": "bdev", 00:20:34.388 "config": [ 00:20:34.388 { 00:20:34.388 "method": "bdev_set_options", 00:20:34.388 "params": { 00:20:34.388 "bdev_io_pool_size": 65535, 00:20:34.388 "bdev_io_cache_size": 256, 00:20:34.388 "bdev_auto_examine": true, 00:20:34.388 "iobuf_small_cache_size": 128, 00:20:34.388 "iobuf_large_cache_size": 16 00:20:34.388 } 00:20:34.388 }, 00:20:34.388 { 00:20:34.388 "method": "bdev_raid_set_options", 00:20:34.388 "params": { 00:20:34.388 "process_window_size_kb": 1024, 00:20:34.388 "process_max_bandwidth_mb_sec": 0 00:20:34.388 } 00:20:34.388 }, 00:20:34.388 { 00:20:34.388 "method": "bdev_iscsi_set_options", 00:20:34.388 "params": { 00:20:34.388 "timeout_sec": 30 00:20:34.388 } 00:20:34.388 }, 00:20:34.388 { 00:20:34.388 "method": "bdev_nvme_set_options", 00:20:34.388 "params": { 00:20:34.388 "action_on_timeout": "none", 00:20:34.388 "timeout_us": 0, 00:20:34.388 "timeout_admin_us": 0, 00:20:34.388 "keep_alive_timeout_ms": 10000, 00:20:34.388 "arbitration_burst": 0, 00:20:34.388 "low_priority_weight": 0, 00:20:34.388 "medium_priority_weight": 0, 00:20:34.388 "high_priority_weight": 0, 00:20:34.388 "nvme_adminq_poll_period_us": 10000, 00:20:34.388 "nvme_ioq_poll_period_us": 0, 00:20:34.388 "io_queue_requests": 0, 00:20:34.388 "delay_cmd_submit": true, 00:20:34.388 "transport_retry_count": 4, 00:20:34.388 "bdev_retry_count": 3, 00:20:34.388 "transport_ack_timeout": 0, 00:20:34.388 "ctrlr_loss_timeout_sec": 0, 00:20:34.388 "reconnect_delay_sec": 0, 00:20:34.388 "fast_io_fail_timeout_sec": 0, 00:20:34.388 "disable_auto_failback": false, 00:20:34.388 "generate_uuids": false, 00:20:34.388 "transport_tos": 0, 00:20:34.388 "nvme_error_stat": false, 00:20:34.388 "rdma_srq_size": 0, 00:20:34.388 "io_path_stat": false, 00:20:34.388 "allow_accel_sequence": false, 00:20:34.388 "rdma_max_cq_size": 0, 00:20:34.388 
"rdma_cm_event_timeout_ms": 0, 00:20:34.388 "dhchap_digests": [ 00:20:34.388 "sha256", 00:20:34.388 "sha384", 00:20:34.388 "sha512" 00:20:34.388 ], 00:20:34.388 "dhchap_dhgroups": [ 00:20:34.388 "null", 00:20:34.388 "ffdhe2048", 00:20:34.388 "ffdhe3072", 00:20:34.388 "ffdhe4096", 00:20:34.388 "ffdhe6144", 00:20:34.388 "ffdhe8192" 00:20:34.388 ] 00:20:34.388 } 00:20:34.388 }, 00:20:34.388 { 00:20:34.388 "method": "bdev_nvme_set_hotplug", 00:20:34.388 "params": { 00:20:34.388 "period_us": 100000, 00:20:34.388 "enable": false 00:20:34.388 } 00:20:34.388 }, 00:20:34.388 { 00:20:34.388 "method": "bdev_malloc_create", 00:20:34.388 "params": { 00:20:34.388 "name": "malloc0", 00:20:34.388 "num_blocks": 8192, 00:20:34.388 "block_size": 4096, 00:20:34.388 "physical_block_size": 4096, 00:20:34.388 "uuid": "fa4e8b7d-5099-43e5-a5e3-e3b303eb1dc5", 00:20:34.388 "optimal_io_boundary": 0, 00:20:34.388 "md_size": 0, 00:20:34.388 "dif_type": 0, 00:20:34.389 "dif_is_head_of_md": false, 00:20:34.389 "dif_pi_format": 0 00:20:34.389 } 00:20:34.389 }, 00:20:34.389 { 00:20:34.389 "method": "bdev_wait_for_examine" 00:20:34.389 } 00:20:34.389 ] 00:20:34.389 }, 00:20:34.389 { 00:20:34.389 "subsystem": "nbd", 00:20:34.389 "config": [] 00:20:34.389 }, 00:20:34.389 { 00:20:34.389 "subsystem": "scheduler", 00:20:34.389 "config": [ 00:20:34.389 { 00:20:34.389 "method": "framework_set_scheduler", 00:20:34.389 "params": { 00:20:34.389 "name": "static" 00:20:34.389 } 00:20:34.389 } 00:20:34.389 ] 00:20:34.389 }, 00:20:34.389 { 00:20:34.389 "subsystem": "nvmf", 00:20:34.389 "config": [ 00:20:34.389 { 00:20:34.389 "method": "nvmf_set_config", 00:20:34.389 "params": { 00:20:34.389 "discovery_filter": "match_any", 00:20:34.389 "admin_cmd_passthru": { 00:20:34.389 "identify_ctrlr": false 00:20:34.389 }, 00:20:34.389 "dhchap_digests": [ 00:20:34.389 "sha256", 00:20:34.389 "sha384", 00:20:34.389 "sha512" 00:20:34.389 ], 00:20:34.389 "dhchap_dhgroups": [ 00:20:34.389 "null", 00:20:34.389 "ffdhe2048", 00:20:34.389 "ffdhe3072", 00:20:34.389 "ffdhe4096", 00:20:34.389 "ffdhe6144", 00:20:34.389 "ffdhe8192" 00:20:34.389 ] 00:20:34.389 } 00:20:34.389 }, 00:20:34.389 { 00:20:34.389 "method": "nvmf_set_max_subsystems", 00:20:34.389 "params": { 00:20:34.389 "max_subsystems": 1024 00:20:34.389 } 00:20:34.389 }, 00:20:34.389 { 00:20:34.389 "method": "nvmf_set_crdt", 00:20:34.389 "params": { 00:20:34.389 "crdt1": 0, 00:20:34.389 "crdt2": 0, 00:20:34.389 "crdt3": 0 00:20:34.389 } 00:20:34.389 }, 00:20:34.389 { 00:20:34.389 "method": "nvmf_create_transport", 00:20:34.389 "params": { 00:20:34.389 "trtype": "TCP", 00:20:34.389 "max_queue_depth": 128, 00:20:34.389 "max_io_qpairs_per_ctrlr": 127, 00:20:34.389 "in_capsule_data_size": 4096, 00:20:34.389 "max_io_size": 131072, 00:20:34.389 "io_unit_size": 131072, 00:20:34.389 "max_aq_depth": 128, 00:20:34.389 "num_shared_buffers": 511, 00:20:34.389 "buf_cache_size": 4294967295, 00:20:34.389 "dif_insert_or_strip": false, 00:20:34.389 "zcopy": false, 00:20:34.389 "c2h_success": false, 00:20:34.389 "sock_priority": 0, 00:20:34.389 "abort_timeout_sec": 1, 00:20:34.389 "ack_timeout": 0, 00:20:34.389 "data_wr_pool_size": 0 00:20:34.389 } 00:20:34.389 }, 00:20:34.389 { 00:20:34.389 "method": "nvmf_create_subsystem", 00:20:34.389 "params": { 00:20:34.389 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.389 "allow_any_host": false, 00:20:34.389 "serial_number": "SPDK00000000000001", 00:20:34.389 "model_number": "SPDK bdev Controller", 00:20:34.389 "max_namespaces": 10, 00:20:34.389 "min_cntlid": 1, 00:20:34.389 
"max_cntlid": 65519, 00:20:34.389 "ana_reporting": false 00:20:34.389 } 00:20:34.389 }, 00:20:34.389 { 00:20:34.389 "method": "nvmf_subsystem_add_host", 00:20:34.389 "params": { 00:20:34.389 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.389 "host": "nqn.2016-06.io.spdk:host1", 00:20:34.389 "psk": "key0" 00:20:34.389 } 00:20:34.389 }, 00:20:34.389 { 00:20:34.389 "method": "nvmf_subsystem_add_ns", 00:20:34.389 "params": { 00:20:34.389 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.389 "namespace": { 00:20:34.389 "nsid": 1, 00:20:34.389 "bdev_name": "malloc0", 00:20:34.389 "nguid": "FA4E8B7D509943E5A5E3E3B303EB1DC5", 00:20:34.389 "uuid": "fa4e8b7d-5099-43e5-a5e3-e3b303eb1dc5", 00:20:34.389 "no_auto_visible": false 00:20:34.389 } 00:20:34.389 } 00:20:34.389 }, 00:20:34.389 { 00:20:34.389 "method": "nvmf_subsystem_add_listener", 00:20:34.389 "params": { 00:20:34.389 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.389 "listen_address": { 00:20:34.389 "trtype": "TCP", 00:20:34.389 "adrfam": "IPv4", 00:20:34.389 "traddr": "10.0.0.2", 00:20:34.389 "trsvcid": "4420" 00:20:34.389 }, 00:20:34.389 "secure_channel": true 00:20:34.389 } 00:20:34.389 } 00:20:34.389 ] 00:20:34.389 } 00:20:34.389 ] 00:20:34.389 }' 00:20:34.389 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:34.650 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:34.650 "subsystems": [ 00:20:34.650 { 00:20:34.650 "subsystem": "keyring", 00:20:34.650 "config": [ 00:20:34.650 { 00:20:34.650 "method": "keyring_file_add_key", 00:20:34.650 "params": { 00:20:34.650 "name": "key0", 00:20:34.650 "path": "/tmp/tmp.VjbBbOvcST" 00:20:34.650 } 00:20:34.650 } 00:20:34.650 ] 00:20:34.650 }, 00:20:34.650 { 00:20:34.650 "subsystem": "iobuf", 00:20:34.650 "config": [ 00:20:34.650 { 00:20:34.650 "method": "iobuf_set_options", 00:20:34.650 "params": { 00:20:34.650 "small_pool_count": 8192, 00:20:34.650 "large_pool_count": 1024, 00:20:34.650 "small_bufsize": 8192, 00:20:34.650 "large_bufsize": 135168, 00:20:34.650 "enable_numa": false 00:20:34.650 } 00:20:34.650 } 00:20:34.650 ] 00:20:34.650 }, 00:20:34.650 { 00:20:34.650 "subsystem": "sock", 00:20:34.650 "config": [ 00:20:34.650 { 00:20:34.650 "method": "sock_set_default_impl", 00:20:34.650 "params": { 00:20:34.650 "impl_name": "posix" 00:20:34.650 } 00:20:34.650 }, 00:20:34.650 { 00:20:34.650 "method": "sock_impl_set_options", 00:20:34.650 "params": { 00:20:34.650 "impl_name": "ssl", 00:20:34.650 "recv_buf_size": 4096, 00:20:34.650 "send_buf_size": 4096, 00:20:34.650 "enable_recv_pipe": true, 00:20:34.650 "enable_quickack": false, 00:20:34.650 "enable_placement_id": 0, 00:20:34.650 "enable_zerocopy_send_server": true, 00:20:34.650 "enable_zerocopy_send_client": false, 00:20:34.650 "zerocopy_threshold": 0, 00:20:34.650 "tls_version": 0, 00:20:34.650 "enable_ktls": false 00:20:34.650 } 00:20:34.650 }, 00:20:34.650 { 00:20:34.650 "method": "sock_impl_set_options", 00:20:34.650 "params": { 00:20:34.650 "impl_name": "posix", 00:20:34.650 "recv_buf_size": 2097152, 00:20:34.650 "send_buf_size": 2097152, 00:20:34.650 "enable_recv_pipe": true, 00:20:34.650 "enable_quickack": false, 00:20:34.650 "enable_placement_id": 0, 00:20:34.650 "enable_zerocopy_send_server": true, 00:20:34.650 "enable_zerocopy_send_client": false, 00:20:34.650 "zerocopy_threshold": 0, 00:20:34.650 "tls_version": 0, 00:20:34.650 "enable_ktls": false 00:20:34.650 } 00:20:34.650 
} 00:20:34.650 ] 00:20:34.650 }, 00:20:34.650 { 00:20:34.650 "subsystem": "vmd", 00:20:34.650 "config": [] 00:20:34.650 }, 00:20:34.650 { 00:20:34.650 "subsystem": "accel", 00:20:34.650 "config": [ 00:20:34.650 { 00:20:34.650 "method": "accel_set_options", 00:20:34.650 "params": { 00:20:34.650 "small_cache_size": 128, 00:20:34.650 "large_cache_size": 16, 00:20:34.650 "task_count": 2048, 00:20:34.650 "sequence_count": 2048, 00:20:34.650 "buf_count": 2048 00:20:34.650 } 00:20:34.650 } 00:20:34.650 ] 00:20:34.650 }, 00:20:34.650 { 00:20:34.650 "subsystem": "bdev", 00:20:34.650 "config": [ 00:20:34.650 { 00:20:34.650 "method": "bdev_set_options", 00:20:34.650 "params": { 00:20:34.650 "bdev_io_pool_size": 65535, 00:20:34.650 "bdev_io_cache_size": 256, 00:20:34.650 "bdev_auto_examine": true, 00:20:34.650 "iobuf_small_cache_size": 128, 00:20:34.650 "iobuf_large_cache_size": 16 00:20:34.650 } 00:20:34.650 }, 00:20:34.650 { 00:20:34.650 "method": "bdev_raid_set_options", 00:20:34.650 "params": { 00:20:34.650 "process_window_size_kb": 1024, 00:20:34.650 "process_max_bandwidth_mb_sec": 0 00:20:34.650 } 00:20:34.650 }, 00:20:34.650 { 00:20:34.650 "method": "bdev_iscsi_set_options", 00:20:34.650 "params": { 00:20:34.650 "timeout_sec": 30 00:20:34.650 } 00:20:34.650 }, 00:20:34.650 { 00:20:34.650 "method": "bdev_nvme_set_options", 00:20:34.650 "params": { 00:20:34.650 "action_on_timeout": "none", 00:20:34.650 "timeout_us": 0, 00:20:34.650 "timeout_admin_us": 0, 00:20:34.650 "keep_alive_timeout_ms": 10000, 00:20:34.650 "arbitration_burst": 0, 00:20:34.650 "low_priority_weight": 0, 00:20:34.650 "medium_priority_weight": 0, 00:20:34.650 "high_priority_weight": 0, 00:20:34.650 "nvme_adminq_poll_period_us": 10000, 00:20:34.650 "nvme_ioq_poll_period_us": 0, 00:20:34.650 "io_queue_requests": 512, 00:20:34.650 "delay_cmd_submit": true, 00:20:34.650 "transport_retry_count": 4, 00:20:34.650 "bdev_retry_count": 3, 00:20:34.650 "transport_ack_timeout": 0, 00:20:34.650 "ctrlr_loss_timeout_sec": 0, 00:20:34.650 "reconnect_delay_sec": 0, 00:20:34.650 "fast_io_fail_timeout_sec": 0, 00:20:34.650 "disable_auto_failback": false, 00:20:34.650 "generate_uuids": false, 00:20:34.650 "transport_tos": 0, 00:20:34.650 "nvme_error_stat": false, 00:20:34.650 "rdma_srq_size": 0, 00:20:34.650 "io_path_stat": false, 00:20:34.650 "allow_accel_sequence": false, 00:20:34.650 "rdma_max_cq_size": 0, 00:20:34.651 "rdma_cm_event_timeout_ms": 0, 00:20:34.651 "dhchap_digests": [ 00:20:34.651 "sha256", 00:20:34.651 "sha384", 00:20:34.651 "sha512" 00:20:34.651 ], 00:20:34.651 "dhchap_dhgroups": [ 00:20:34.651 "null", 00:20:34.651 "ffdhe2048", 00:20:34.651 "ffdhe3072", 00:20:34.651 "ffdhe4096", 00:20:34.651 "ffdhe6144", 00:20:34.651 "ffdhe8192" 00:20:34.651 ] 00:20:34.651 } 00:20:34.651 }, 00:20:34.651 { 00:20:34.651 "method": "bdev_nvme_attach_controller", 00:20:34.651 "params": { 00:20:34.651 "name": "TLSTEST", 00:20:34.651 "trtype": "TCP", 00:20:34.651 "adrfam": "IPv4", 00:20:34.651 "traddr": "10.0.0.2", 00:20:34.651 "trsvcid": "4420", 00:20:34.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.651 "prchk_reftag": false, 00:20:34.651 "prchk_guard": false, 00:20:34.651 "ctrlr_loss_timeout_sec": 0, 00:20:34.651 "reconnect_delay_sec": 0, 00:20:34.651 "fast_io_fail_timeout_sec": 0, 00:20:34.651 "psk": "key0", 00:20:34.651 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:34.651 "hdgst": false, 00:20:34.651 "ddgst": false, 00:20:34.651 "multipath": "multipath" 00:20:34.651 } 00:20:34.651 }, 00:20:34.651 { 00:20:34.651 "method": 
"bdev_nvme_set_hotplug", 00:20:34.651 "params": { 00:20:34.651 "period_us": 100000, 00:20:34.651 "enable": false 00:20:34.651 } 00:20:34.651 }, 00:20:34.651 { 00:20:34.651 "method": "bdev_wait_for_examine" 00:20:34.651 } 00:20:34.651 ] 00:20:34.651 }, 00:20:34.651 { 00:20:34.651 "subsystem": "nbd", 00:20:34.651 "config": [] 00:20:34.651 } 00:20:34.651 ] 00:20:34.651 }' 00:20:34.651 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1984681 00:20:34.651 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1984681 ']' 00:20:34.651 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1984681 00:20:34.651 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:34.651 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.651 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1984681 00:20:34.651 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:34.651 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:34.651 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1984681' 00:20:34.651 killing process with pid 1984681 00:20:34.651 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1984681 00:20:34.651 Received shutdown signal, test time was about 10.000000 seconds 00:20:34.651 00:20:34.651 Latency(us) 00:20:34.651 [2024-11-20T16:03:26.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.651 [2024-11-20T16:03:26.827Z] =================================================================================================================== 00:20:34.651 [2024-11-20T16:03:26.827Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:34.651 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1984681 00:20:34.912 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1984315 00:20:34.912 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1984315 ']' 00:20:34.912 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1984315 00:20:34.912 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:34.913 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.913 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1984315 00:20:34.913 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:34.913 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:34.913 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1984315' 00:20:34.913 killing process with pid 1984315 00:20:34.913 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1984315 00:20:34.913 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1984315 00:20:34.913 17:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:34.913 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:34.913 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.913 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.913 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:34.913 "subsystems": [ 00:20:34.913 { 00:20:34.913 "subsystem": "keyring", 00:20:34.913 "config": [ 00:20:34.913 { 00:20:34.913 "method": "keyring_file_add_key", 00:20:34.913 "params": { 00:20:34.913 "name": "key0", 00:20:34.913 "path": "/tmp/tmp.VjbBbOvcST" 00:20:34.913 } 00:20:34.913 } 00:20:34.913 ] 00:20:34.913 }, 00:20:34.913 { 00:20:34.913 "subsystem": "iobuf", 00:20:34.913 "config": [ 00:20:34.913 { 00:20:34.913 "method": "iobuf_set_options", 00:20:34.913 "params": { 00:20:34.913 "small_pool_count": 8192, 00:20:34.913 "large_pool_count": 1024, 00:20:34.913 "small_bufsize": 8192, 00:20:34.913 "large_bufsize": 135168, 00:20:34.913 "enable_numa": false 00:20:34.913 } 00:20:34.913 } 00:20:34.913 ] 00:20:34.913 }, 00:20:34.913 { 00:20:34.913 "subsystem": "sock", 00:20:34.913 "config": [ 00:20:34.913 { 00:20:34.913 "method": "sock_set_default_impl", 00:20:34.913 "params": { 00:20:34.913 "impl_name": "posix" 00:20:34.913 } 00:20:34.913 }, 00:20:34.913 { 00:20:34.913 "method": "sock_impl_set_options", 00:20:34.913 "params": { 00:20:34.913 "impl_name": "ssl", 00:20:34.913 "recv_buf_size": 4096, 00:20:34.913 "send_buf_size": 4096, 00:20:34.913 "enable_recv_pipe": true, 00:20:34.913 "enable_quickack": false, 00:20:34.913 "enable_placement_id": 0, 00:20:34.913 "enable_zerocopy_send_server": true, 00:20:34.913 "enable_zerocopy_send_client": false, 00:20:34.913 "zerocopy_threshold": 0, 00:20:34.913 "tls_version": 0, 00:20:34.913 "enable_ktls": false 00:20:34.913 } 00:20:34.913 }, 00:20:34.913 { 00:20:34.913 "method": "sock_impl_set_options", 00:20:34.913 "params": { 00:20:34.913 "impl_name": "posix", 00:20:34.913 "recv_buf_size": 2097152, 00:20:34.913 "send_buf_size": 2097152, 00:20:34.913 "enable_recv_pipe": true, 00:20:34.913 "enable_quickack": false, 00:20:34.913 "enable_placement_id": 0, 00:20:34.913 "enable_zerocopy_send_server": true, 00:20:34.913 "enable_zerocopy_send_client": false, 00:20:34.913 "zerocopy_threshold": 0, 00:20:34.913 "tls_version": 0, 00:20:34.913 "enable_ktls": false 00:20:34.913 } 00:20:34.913 } 00:20:34.913 ] 00:20:34.913 }, 00:20:34.913 { 00:20:34.913 "subsystem": "vmd", 00:20:34.913 "config": [] 00:20:34.913 }, 00:20:34.913 { 00:20:34.913 "subsystem": "accel", 00:20:34.913 "config": [ 00:20:34.913 { 00:20:34.913 "method": "accel_set_options", 00:20:34.913 "params": { 00:20:34.913 "small_cache_size": 128, 00:20:34.913 "large_cache_size": 16, 00:20:34.913 "task_count": 2048, 00:20:34.913 "sequence_count": 2048, 00:20:34.913 "buf_count": 2048 00:20:34.913 } 00:20:34.913 } 00:20:34.913 ] 00:20:34.913 }, 00:20:34.913 { 00:20:34.913 "subsystem": "bdev", 00:20:34.913 "config": [ 00:20:34.913 { 00:20:34.913 "method": "bdev_set_options", 00:20:34.913 "params": { 00:20:34.913 "bdev_io_pool_size": 65535, 00:20:34.913 "bdev_io_cache_size": 256, 00:20:34.913 "bdev_auto_examine": true, 00:20:34.913 "iobuf_small_cache_size": 128, 00:20:34.913 "iobuf_large_cache_size": 16 00:20:34.913 } 00:20:34.913 }, 00:20:34.913 { 00:20:34.913 "method": "bdev_raid_set_options", 00:20:34.913 "params": { 00:20:34.913 
"process_window_size_kb": 1024, 00:20:34.913 "process_max_bandwidth_mb_sec": 0 00:20:34.913 } 00:20:34.913 }, 00:20:34.913 { 00:20:34.913 "method": "bdev_iscsi_set_options", 00:20:34.913 "params": { 00:20:34.913 "timeout_sec": 30 00:20:34.913 } 00:20:34.913 }, 00:20:34.913 { 00:20:34.913 "method": "bdev_nvme_set_options", 00:20:34.913 "params": { 00:20:34.913 "action_on_timeout": "none", 00:20:34.913 "timeout_us": 0, 00:20:34.913 "timeout_admin_us": 0, 00:20:34.913 "keep_alive_timeout_ms": 10000, 00:20:34.913 "arbitration_burst": 0, 00:20:34.913 "low_priority_weight": 0, 00:20:34.913 "medium_priority_weight": 0, 00:20:34.913 "high_priority_weight": 0, 00:20:34.913 "nvme_adminq_poll_period_us": 10000, 00:20:34.913 "nvme_ioq_poll_period_us": 0, 00:20:34.913 "io_queue_requests": 0, 00:20:34.913 "delay_cmd_submit": true, 00:20:34.913 "transport_retry_count": 4, 00:20:34.913 "bdev_retry_count": 3, 00:20:34.913 "transport_ack_timeout": 0, 00:20:34.913 "ctrlr_loss_timeout_sec": 0, 00:20:34.913 "reconnect_delay_sec": 0, 00:20:34.913 "fast_io_fail_timeout_sec": 0, 00:20:34.913 "disable_auto_failback": false, 00:20:34.913 "generate_uuids": false, 00:20:34.913 "transport_tos": 0, 00:20:34.913 "nvme_error_stat": false, 00:20:34.913 "rdma_srq_size": 0, 00:20:34.913 "io_path_stat": false, 00:20:34.913 "allow_accel_sequence": false, 00:20:34.913 "rdma_max_cq_size": 0, 00:20:34.913 "rdma_cm_event_timeout_ms": 0, 00:20:34.913 "dhchap_digests": [ 00:20:34.913 "sha256", 00:20:34.913 "sha384", 00:20:34.913 "sha512" 00:20:34.913 ], 00:20:34.913 "dhchap_dhgroups": [ 00:20:34.913 "null", 00:20:34.913 "ffdhe2048", 00:20:34.913 "ffdhe3072", 00:20:34.913 "ffdhe4096", 00:20:34.913 "ffdhe6144", 00:20:34.913 "ffdhe8192" 00:20:34.913 ] 00:20:34.913 } 00:20:34.913 }, 00:20:34.913 { 00:20:34.913 "method": "bdev_nvme_set_hotplug", 00:20:34.913 "params": { 00:20:34.913 "period_us": 100000, 00:20:34.913 "enable": false 00:20:34.913 } 00:20:34.913 }, 00:20:34.913 { 00:20:34.913 "method": "bdev_malloc_create", 00:20:34.913 "params": { 00:20:34.913 "name": "malloc0", 00:20:34.913 "num_blocks": 8192, 00:20:34.913 "block_size": 4096, 00:20:34.913 "physical_block_size": 4096, 00:20:34.913 "uuid": "fa4e8b7d-5099-43e5-a5e3-e3b303eb1dc5", 00:20:34.913 "optimal_io_boundary": 0, 00:20:34.913 "md_size": 0, 00:20:34.913 "dif_type": 0, 00:20:34.913 "dif_is_head_of_md": false, 00:20:34.913 "dif_pi_format": 0 00:20:34.913 } 00:20:34.913 }, 00:20:34.913 { 00:20:34.913 "method": "bdev_wait_for_examine" 00:20:34.913 } 00:20:34.913 ] 00:20:34.913 }, 00:20:34.913 { 00:20:34.913 "subsystem": "nbd", 00:20:34.913 "config": [] 00:20:34.913 }, 00:20:34.913 { 00:20:34.913 "subsystem": "scheduler", 00:20:34.913 "config": [ 00:20:34.913 { 00:20:34.913 "method": "framework_set_scheduler", 00:20:34.913 "params": { 00:20:34.913 "name": "static" 00:20:34.913 } 00:20:34.913 } 00:20:34.913 ] 00:20:34.913 }, 00:20:34.913 { 00:20:34.913 "subsystem": "nvmf", 00:20:34.913 "config": [ 00:20:34.913 { 00:20:34.913 "method": "nvmf_set_config", 00:20:34.913 "params": { 00:20:34.913 "discovery_filter": "match_any", 00:20:34.913 "admin_cmd_passthru": { 00:20:34.913 "identify_ctrlr": false 00:20:34.913 }, 00:20:34.913 "dhchap_digests": [ 00:20:34.913 "sha256", 00:20:34.914 "sha384", 00:20:34.914 "sha512" 00:20:34.914 ], 00:20:34.914 "dhchap_dhgroups": [ 00:20:34.914 "null", 00:20:34.914 "ffdhe2048", 00:20:34.914 "ffdhe3072", 00:20:34.914 "ffdhe4096", 00:20:34.914 "ffdhe6144", 00:20:34.914 "ffdhe8192" 00:20:34.914 ] 00:20:34.914 } 00:20:34.914 }, 00:20:34.914 { 
00:20:34.914 "method": "nvmf_set_max_subsystems", 00:20:34.914 "params": { 00:20:34.914 "max_subsystems": 1024 00:20:34.914 } 00:20:34.914 }, 00:20:34.914 { 00:20:34.914 "method": "nvmf_set_crdt", 00:20:34.914 "params": { 00:20:34.914 "crdt1": 0, 00:20:34.914 "crdt2": 0, 00:20:34.914 "crdt3": 0 00:20:34.914 } 00:20:34.914 }, 00:20:34.914 { 00:20:34.914 "method": "nvmf_create_transport", 00:20:34.914 "params": { 00:20:34.914 "trtype": "TCP", 00:20:34.914 "max_queue_depth": 128, 00:20:34.914 "max_io_qpairs_per_ctrlr": 127, 00:20:34.914 "in_capsule_data_size": 4096, 00:20:34.914 "max_io_size": 131072, 00:20:34.914 "io_unit_size": 131072, 00:20:34.914 "max_aq_depth": 128, 00:20:34.914 "num_shared_buffers": 511, 00:20:34.914 "buf_cache_size": 4294967295, 00:20:34.914 "dif_insert_or_strip": false, 00:20:34.914 "zcopy": false, 00:20:34.914 "c2h_success": false, 00:20:34.914 "sock_priority": 0, 00:20:34.914 "abort_timeout_sec": 1, 00:20:34.914 "ack_timeout": 0, 00:20:34.914 "data_wr_pool_size": 0 00:20:34.914 } 00:20:34.914 }, 00:20:34.914 { 00:20:34.914 "method": "nvmf_create_subsystem", 00:20:34.914 "params": { 00:20:34.914 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.914 "allow_any_host": false, 00:20:34.914 "serial_number": "SPDK00000000000001", 00:20:34.914 "model_number": "SPDK bdev Controller", 00:20:34.914 "max_namespaces": 10, 00:20:34.914 "min_cntlid": 1, 00:20:34.914 "max_cntlid": 65519, 00:20:34.914 "ana_reporting": false 00:20:34.914 } 00:20:34.914 }, 00:20:34.914 { 00:20:34.914 "method": "nvmf_subsystem_add_host", 00:20:34.914 "params": { 00:20:34.914 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.914 "host": "nqn.2016-06.io.spdk:host1", 00:20:34.914 "psk": "key0" 00:20:34.914 } 00:20:34.914 }, 00:20:34.914 { 00:20:34.914 "method": "nvmf_subsystem_add_ns", 00:20:34.914 "params": { 00:20:34.914 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.914 "namespace": { 00:20:34.914 "nsid": 1, 00:20:34.914 "bdev_name": "malloc0", 00:20:34.914 "nguid": "FA4E8B7D509943E5A5E3E3B303EB1DC5", 00:20:34.914 "uuid": "fa4e8b7d-5099-43e5-a5e3-e3b303eb1dc5", 00:20:34.914 "no_auto_visible": false 00:20:34.914 } 00:20:34.914 } 00:20:34.914 }, 00:20:34.914 { 00:20:34.914 "method": "nvmf_subsystem_add_listener", 00:20:34.914 "params": { 00:20:34.914 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.914 "listen_address": { 00:20:34.914 "trtype": "TCP", 00:20:34.914 "adrfam": "IPv4", 00:20:34.914 "traddr": "10.0.0.2", 00:20:34.914 "trsvcid": "4420" 00:20:34.914 }, 00:20:34.914 "secure_channel": true 00:20:34.914 } 00:20:34.914 } 00:20:34.914 ] 00:20:34.914 } 00:20:34.914 ] 00:20:34.914 }' 00:20:34.914 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1985041 00:20:34.914 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1985041 00:20:34.914 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:34.914 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1985041 ']' 00:20:34.914 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.914 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.914 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:20:34.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.914 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.914 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.914 [2024-11-20 17:03:27.075081] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:20:34.914 [2024-11-20 17:03:27.075140] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.176 [2024-11-20 17:03:27.166937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.176 [2024-11-20 17:03:27.195631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.176 [2024-11-20 17:03:27.195657] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.176 [2024-11-20 17:03:27.195662] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.176 [2024-11-20 17:03:27.195667] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.176 [2024-11-20 17:03:27.195671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:35.176 [2024-11-20 17:03:27.196143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.436 [2024-11-20 17:03:27.389561] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.436 [2024-11-20 17:03:27.421586] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:35.436 [2024-11-20 17:03:27.421781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.696 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.696 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:35.696 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:35.696 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.696 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.956 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.956 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1985383 00:20:35.956 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1985383 /var/tmp/bdevperf.sock 00:20:35.956 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1985383 ']' 00:20:35.956 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.956 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.956 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:35.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.956 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:35.956 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.956 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.956 17:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:35.956 "subsystems": [ 00:20:35.956 { 00:20:35.956 "subsystem": "keyring", 00:20:35.957 "config": [ 00:20:35.957 { 00:20:35.957 "method": "keyring_file_add_key", 00:20:35.957 "params": { 00:20:35.957 "name": "key0", 00:20:35.957 "path": "/tmp/tmp.VjbBbOvcST" 00:20:35.957 } 00:20:35.957 } 00:20:35.957 ] 00:20:35.957 }, 00:20:35.957 { 00:20:35.957 "subsystem": "iobuf", 00:20:35.957 "config": [ 00:20:35.957 { 00:20:35.957 "method": "iobuf_set_options", 00:20:35.957 "params": { 00:20:35.957 "small_pool_count": 8192, 00:20:35.957 "large_pool_count": 1024, 00:20:35.957 "small_bufsize": 8192, 00:20:35.957 "large_bufsize": 135168, 00:20:35.957 "enable_numa": false 00:20:35.957 } 00:20:35.957 } 00:20:35.957 ] 00:20:35.957 }, 00:20:35.957 { 00:20:35.957 "subsystem": "sock", 00:20:35.957 "config": [ 00:20:35.957 { 00:20:35.957 "method": "sock_set_default_impl", 00:20:35.957 "params": { 00:20:35.957 "impl_name": "posix" 00:20:35.957 } 00:20:35.957 }, 00:20:35.957 { 00:20:35.957 "method": "sock_impl_set_options", 00:20:35.957 "params": { 00:20:35.957 "impl_name": "ssl", 00:20:35.957 "recv_buf_size": 4096, 00:20:35.957 "send_buf_size": 4096, 00:20:35.957 "enable_recv_pipe": true, 00:20:35.957 "enable_quickack": false, 00:20:35.957 "enable_placement_id": 0, 00:20:35.957 "enable_zerocopy_send_server": true, 00:20:35.957 "enable_zerocopy_send_client": false, 00:20:35.957 "zerocopy_threshold": 0, 00:20:35.957 "tls_version": 0, 00:20:35.957 "enable_ktls": false 00:20:35.957 } 00:20:35.957 }, 00:20:35.957 { 00:20:35.957 "method": "sock_impl_set_options", 00:20:35.957 "params": { 00:20:35.957 "impl_name": "posix", 00:20:35.957 "recv_buf_size": 2097152, 00:20:35.957 "send_buf_size": 2097152, 00:20:35.957 "enable_recv_pipe": true, 00:20:35.957 "enable_quickack": false, 00:20:35.957 "enable_placement_id": 0, 00:20:35.957 "enable_zerocopy_send_server": true, 00:20:35.957 "enable_zerocopy_send_client": false, 00:20:35.957 "zerocopy_threshold": 0, 00:20:35.957 "tls_version": 0, 00:20:35.957 "enable_ktls": false 00:20:35.957 } 00:20:35.957 } 00:20:35.957 ] 00:20:35.957 }, 00:20:35.957 { 00:20:35.957 "subsystem": "vmd", 00:20:35.957 "config": [] 00:20:35.957 }, 00:20:35.957 { 00:20:35.957 "subsystem": "accel", 00:20:35.957 "config": [ 00:20:35.957 { 00:20:35.957 "method": "accel_set_options", 00:20:35.957 "params": { 00:20:35.957 "small_cache_size": 128, 00:20:35.957 "large_cache_size": 16, 00:20:35.957 "task_count": 2048, 00:20:35.957 "sequence_count": 2048, 00:20:35.957 "buf_count": 2048 00:20:35.957 } 00:20:35.957 } 00:20:35.957 ] 00:20:35.957 }, 00:20:35.957 { 00:20:35.957 "subsystem": "bdev", 00:20:35.957 "config": [ 00:20:35.957 { 00:20:35.957 "method": "bdev_set_options", 00:20:35.957 "params": { 00:20:35.957 "bdev_io_pool_size": 65535, 00:20:35.957 "bdev_io_cache_size": 256, 00:20:35.957 "bdev_auto_examine": true, 00:20:35.957 "iobuf_small_cache_size": 128, 
00:20:35.957 "iobuf_large_cache_size": 16 00:20:35.957 } 00:20:35.957 }, 00:20:35.957 { 00:20:35.957 "method": "bdev_raid_set_options", 00:20:35.957 "params": { 00:20:35.957 "process_window_size_kb": 1024, 00:20:35.957 "process_max_bandwidth_mb_sec": 0 00:20:35.957 } 00:20:35.957 }, 00:20:35.957 { 00:20:35.957 "method": "bdev_iscsi_set_options", 00:20:35.957 "params": { 00:20:35.957 "timeout_sec": 30 00:20:35.957 } 00:20:35.957 }, 00:20:35.957 { 00:20:35.957 "method": "bdev_nvme_set_options", 00:20:35.957 "params": { 00:20:35.957 "action_on_timeout": "none", 00:20:35.957 "timeout_us": 0, 00:20:35.957 "timeout_admin_us": 0, 00:20:35.957 "keep_alive_timeout_ms": 10000, 00:20:35.957 "arbitration_burst": 0, 00:20:35.957 "low_priority_weight": 0, 00:20:35.957 "medium_priority_weight": 0, 00:20:35.957 "high_priority_weight": 0, 00:20:35.957 "nvme_adminq_poll_period_us": 10000, 00:20:35.957 "nvme_ioq_poll_period_us": 0, 00:20:35.957 "io_queue_requests": 512, 00:20:35.957 "delay_cmd_submit": true, 00:20:35.957 "transport_retry_count": 4, 00:20:35.957 "bdev_retry_count": 3, 00:20:35.957 "transport_ack_timeout": 0, 00:20:35.957 "ctrlr_loss_timeout_sec": 0, 00:20:35.957 "reconnect_delay_sec": 0, 00:20:35.957 "fast_io_fail_timeout_sec": 0, 00:20:35.957 "disable_auto_failback": false, 00:20:35.957 "generate_uuids": false, 00:20:35.957 "transport_tos": 0, 00:20:35.957 "nvme_error_stat": false, 00:20:35.957 "rdma_srq_size": 0, 00:20:35.957 "io_path_stat": false, 00:20:35.957 "allow_accel_sequence": false, 00:20:35.957 "rdma_max_cq_size": 0, 00:20:35.957 "rdma_cm_event_timeout_ms": 0, 00:20:35.957 "dhchap_digests": [ 00:20:35.957 "sha256", 00:20:35.957 "sha384", 00:20:35.957 "sha512" 00:20:35.957 ], 00:20:35.957 "dhchap_dhgroups": [ 00:20:35.957 "null", 00:20:35.957 "ffdhe2048", 00:20:35.957 "ffdhe3072", 00:20:35.957 "ffdhe4096", 00:20:35.957 "ffdhe6144", 00:20:35.957 "ffdhe8192" 00:20:35.957 ] 00:20:35.957 } 00:20:35.957 }, 00:20:35.957 { 00:20:35.957 "method": "bdev_nvme_attach_controller", 00:20:35.957 "params": { 00:20:35.957 "name": "TLSTEST", 00:20:35.957 "trtype": "TCP", 00:20:35.957 "adrfam": "IPv4", 00:20:35.957 "traddr": "10.0.0.2", 00:20:35.957 "trsvcid": "4420", 00:20:35.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.957 "prchk_reftag": false, 00:20:35.957 "prchk_guard": false, 00:20:35.957 "ctrlr_loss_timeout_sec": 0, 00:20:35.957 "reconnect_delay_sec": 0, 00:20:35.957 "fast_io_fail_timeout_sec": 0, 00:20:35.957 "psk": "key0", 00:20:35.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.957 "hdgst": false, 00:20:35.957 "ddgst": false, 00:20:35.957 "multipath": "multipath" 00:20:35.957 } 00:20:35.957 }, 00:20:35.957 { 00:20:35.957 "method": "bdev_nvme_set_hotplug", 00:20:35.957 "params": { 00:20:35.957 "period_us": 100000, 00:20:35.957 "enable": false 00:20:35.957 } 00:20:35.957 }, 00:20:35.957 { 00:20:35.957 "method": "bdev_wait_for_examine" 00:20:35.957 } 00:20:35.957 ] 00:20:35.957 }, 00:20:35.957 { 00:20:35.957 "subsystem": "nbd", 00:20:35.957 "config": [] 00:20:35.957 } 00:20:35.957 ] 00:20:35.957 }' 00:20:35.957 [2024-11-20 17:03:27.937040] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
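Note: unlike the earlier bdevperf run (pid 1984681), which issued keyring_file_add_key and bdev_nvme_attach_controller over /var/tmp/bdevperf.sock (target/tls.sh@193-194), this run pre-loads the same steps through the JSON config echoed above and passed on /dev/fd/63. A sketch of the equivalent initiator-side sequence, assembled from commands that appear earlier in this log (rpc.py and bdevperf.py paths shortened; the log uses full /var/jenkins/... paths):

    # register the PSK with the bdevperf app and attach over TLS
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VjbBbOvcST
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0
    # then drive the verify workload (20 s budget, 10 s test as shown below)
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests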
00:20:35.957 [2024-11-20 17:03:27.937094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1985383 ] 00:20:35.958 [2024-11-20 17:03:28.021623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.958 [2024-11-20 17:03:28.050831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.217 [2024-11-20 17:03:28.185858] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.787 17:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.787 17:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:36.787 17:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:36.787 Running I/O for 10 seconds... 00:20:38.675 4942.00 IOPS, 19.30 MiB/s [2024-11-20T16:03:32.235Z] 5059.50 IOPS, 19.76 MiB/s [2024-11-20T16:03:33.174Z] 5387.67 IOPS, 21.05 MiB/s [2024-11-20T16:03:34.116Z] 5352.25 IOPS, 20.91 MiB/s [2024-11-20T16:03:35.057Z] 5382.20 IOPS, 21.02 MiB/s [2024-11-20T16:03:35.998Z] 5278.50 IOPS, 20.62 MiB/s [2024-11-20T16:03:36.940Z] 5420.71 IOPS, 21.17 MiB/s [2024-11-20T16:03:37.880Z] 5542.25 IOPS, 21.65 MiB/s [2024-11-20T16:03:39.264Z] 5623.44 IOPS, 21.97 MiB/s [2024-11-20T16:03:39.264Z] 5557.30 IOPS, 21.71 MiB/s 00:20:47.088 Latency(us) 00:20:47.088 [2024-11-20T16:03:39.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.088 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:47.088 Verification LBA range: start 0x0 length 0x2000 00:20:47.088 TLSTESTn1 : 10.04 5548.65 21.67 0.00 0.00 23012.00 6417.07 71652.69 00:20:47.088 [2024-11-20T16:03:39.264Z] =================================================================================================================== 00:20:47.088 [2024-11-20T16:03:39.264Z] Total : 5548.65 21.67 0.00 0.00 23012.00 6417.07 71652.69 00:20:47.088 { 00:20:47.088 "results": [ 00:20:47.088 { 00:20:47.088 "job": "TLSTESTn1", 00:20:47.088 "core_mask": "0x4", 00:20:47.088 "workload": "verify", 00:20:47.088 "status": "finished", 00:20:47.088 "verify_range": { 00:20:47.088 "start": 0, 00:20:47.088 "length": 8192 00:20:47.088 }, 00:20:47.088 "queue_depth": 128, 00:20:47.088 "io_size": 4096, 00:20:47.088 "runtime": 10.038663, 00:20:47.088 "iops": 5548.647265079025, 00:20:47.088 "mibps": 21.67440337921494, 00:20:47.088 "io_failed": 0, 00:20:47.088 "io_timeout": 0, 00:20:47.088 "avg_latency_us": 23012.00442840643, 00:20:47.088 "min_latency_us": 6417.066666666667, 00:20:47.088 "max_latency_us": 71652.69333333333 00:20:47.088 } 00:20:47.088 ], 00:20:47.088 "core_count": 1 00:20:47.088 } 00:20:47.088 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:47.088 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1985383 00:20:47.088 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1985383 ']' 00:20:47.088 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1985383 00:20:47.088 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:20:47.088 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.088 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1985383 00:20:47.088 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:47.088 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:47.088 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1985383' 00:20:47.088 killing process with pid 1985383 00:20:47.088 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1985383 00:20:47.088 Received shutdown signal, test time was about 10.000000 seconds 00:20:47.088 00:20:47.088 Latency(us) 00:20:47.088 [2024-11-20T16:03:39.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.088 [2024-11-20T16:03:39.264Z] =================================================================================================================== 00:20:47.088 [2024-11-20T16:03:39.264Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:47.088 17:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1985383 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1985041 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1985041 ']' 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1985041 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1985041 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1985041' 00:20:47.088 killing process with pid 1985041 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1985041 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1985041 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1987426 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1987426 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
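With the first bdevperf (pid 1985383) and target (pid 1985041) gone, nvmfappstart relaunches the target inside the cvl_0_0_ns_spdk network namespace so it owns the e810 test ports; -i 0 pins the shared-memory instance id and -e 0xFFFF enables every tracepoint group, which the "Tracepoint Group Mask 0xFFFF" notice below acknowledges. A rough equivalent of the launch-and-wait step; the readiness loop is an assumption standing in for the suite's waitforlisten helper:

    # Start the target in the test's network namespace (names from the log).
    sudo ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!

    # Poll the default RPC socket (/var/tmp/spdk.sock) until the app answers;
    # rpc_get_methods is a cheap query and -t 1 caps each attempt at one second.
    until "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done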
00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1987426 ']' 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.088 17:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.349 [2024-11-20 17:03:39.297678] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:20:47.349 [2024-11-20 17:03:39.297738] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.349 [2024-11-20 17:03:39.392988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.349 [2024-11-20 17:03:39.442049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.349 [2024-11-20 17:03:39.442102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.349 [2024-11-20 17:03:39.442115] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.349 [2024-11-20 17:03:39.442126] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.349 [2024-11-20 17:03:39.442136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
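Because the target is running with the full 0xFFFF tracepoint mask, the snapshot command quoted in the notices above works at any point during the test. A short sketch, assuming the spdk_trace tool from the same build tree:

    # Decode a snapshot of the live nvmf target's trace buffer
    # (the exact command the startup notice suggests).
    "$SPDK_DIR/build/bin/spdk_trace" -s nvmf -i 0 > /tmp/nvmf_trace.txt

    # Or, as the last notice says, keep the raw shm file for offline analysis.
    cp /dev/shm/nvmf_trace.0 /tmp/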
00:20:47.349 [2024-11-20 17:03:39.442931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.291 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.291 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:48.291 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:48.291 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:48.291 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.291 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.291 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.VjbBbOvcST 00:20:48.291 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VjbBbOvcST 00:20:48.291 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:48.291 [2024-11-20 17:03:40.322905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.291 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:48.552 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:48.552 [2024-11-20 17:03:40.695849] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:48.552 [2024-11-20 17:03:40.696215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.552 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:48.813 malloc0 00:20:48.813 17:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:49.074 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VjbBbOvcST 00:20:49.335 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:49.335 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:49.335 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1987889 00:20:49.335 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:49.335 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1987889 /var/tmp/bdevperf.sock 00:20:49.335 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1987889 ']' 00:20:49.335 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.335 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.335 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:49.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:49.335 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.335 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.596 [2024-11-20 17:03:41.518911] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:20:49.596 [2024-11-20 17:03:41.518981] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1987889 ] 00:20:49.596 [2024-11-20 17:03:41.605996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.596 [2024-11-20 17:03:41.640779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.596 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.596 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:49.596 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VjbBbOvcST 00:20:49.857 17:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:49.857 [2024-11-20 17:03:42.026309] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:50.117 nvme0n1 00:20:50.117 17:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:50.117 Running I/O for 1 seconds... 
00:20:51.058 5234.00 IOPS, 20.45 MiB/s 00:20:51.058 Latency(us) 00:20:51.058 [2024-11-20T16:03:43.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.058 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:51.058 Verification LBA range: start 0x0 length 0x2000 00:20:51.058 nvme0n1 : 1.01 5283.94 20.64 0.00 0.00 24038.13 5406.72 31238.83 00:20:51.058 [2024-11-20T16:03:43.234Z] =================================================================================================================== 00:20:51.058 [2024-11-20T16:03:43.234Z] Total : 5283.94 20.64 0.00 0.00 24038.13 5406.72 31238.83 00:20:51.058 { 00:20:51.058 "results": [ 00:20:51.058 { 00:20:51.058 "job": "nvme0n1", 00:20:51.058 "core_mask": "0x2", 00:20:51.058 "workload": "verify", 00:20:51.058 "status": "finished", 00:20:51.058 "verify_range": { 00:20:51.058 "start": 0, 00:20:51.058 "length": 8192 00:20:51.058 }, 00:20:51.058 "queue_depth": 128, 00:20:51.058 "io_size": 4096, 00:20:51.058 "runtime": 1.014773, 00:20:51.058 "iops": 5283.9403492209585, 00:20:51.058 "mibps": 20.64039198914437, 00:20:51.058 "io_failed": 0, 00:20:51.058 "io_timeout": 0, 00:20:51.058 "avg_latency_us": 24038.132000497328, 00:20:51.058 "min_latency_us": 5406.72, 00:20:51.058 "max_latency_us": 31238.826666666668 00:20:51.058 } 00:20:51.058 ], 00:20:51.058 "core_count": 1 00:20:51.058 } 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1987889 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1987889 ']' 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1987889 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1987889 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1987889' 00:20:51.318 killing process with pid 1987889 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1987889 00:20:51.318 Received shutdown signal, test time was about 1.000000 seconds 00:20:51.318 00:20:51.318 Latency(us) 00:20:51.318 [2024-11-20T16:03:43.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.318 [2024-11-20T16:03:43.494Z] =================================================================================================================== 00:20:51.318 [2024-11-20T16:03:43.494Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1987889 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1987426 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1987426 ']' 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1987426 00:20:51.318 17:03:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1987426 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1987426' 00:20:51.318 killing process with pid 1987426 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1987426 00:20:51.318 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1987426 00:20:51.579 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:51.579 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:51.579 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.579 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.579 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1988424 00:20:51.579 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1988424 00:20:51.579 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:51.579 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1988424 ']' 00:20:51.579 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.579 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.579 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.579 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.579 17:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.579 [2024-11-20 17:03:43.670560] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:20:51.579 [2024-11-20 17:03:43.670622] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.841 [2024-11-20 17:03:43.768004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.841 [2024-11-20 17:03:43.817913] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.841 [2024-11-20 17:03:43.817987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:51.841 [2024-11-20 17:03:43.817996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.841 [2024-11-20 17:03:43.818003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.841 [2024-11-20 17:03:43.818009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:51.841 [2024-11-20 17:03:43.818825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.413 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.413 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:52.413 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:52.413 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:52.413 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.413 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.413 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:52.413 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.413 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.413 [2024-11-20 17:03:44.529714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.413 malloc0 00:20:52.413 [2024-11-20 17:03:44.559740] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:52.413 [2024-11-20 17:03:44.560086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:52.674 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.674 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1988493 00:20:52.674 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1988493 /var/tmp/bdevperf.sock 00:20:52.674 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:52.674 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1988493 ']' 00:20:52.674 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:52.674 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.674 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:52.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:52.674 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.674 17:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.674 [2024-11-20 17:03:44.641943] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
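Each bdevperf pass in this part of the test is preceded by the same provisioning sequence: the target gets a TCP transport, a subsystem with a TLS-enabled listener (-k), a malloc namespace, and a PSK-guarded host entry, and the initiator registers the same key before attaching with --psk. Condensed from the RPC calls visible in the log (the contents of /tmp/tmp.VjbBbOvcST are never printed and are left elided here):

    RPC="$SPDK_DIR/scripts/rpc.py"

    # Target side (default socket /var/tmp/spdk.sock, inside the netns):
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k        # -k makes this a TLS listener
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 /tmp/tmp.VjbBbOvcST
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0

    # Initiator side (bdevperf's RPC socket): same key name, then a TLS attach.
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VjbBbOvcST
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1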
00:20:52.674 [2024-11-20 17:03:44.642002] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1988493 ] 00:20:52.674 [2024-11-20 17:03:44.728391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.674 [2024-11-20 17:03:44.762530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.617 17:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.617 17:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:53.617 17:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VjbBbOvcST 00:20:53.617 17:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:53.617 [2024-11-20 17:03:45.757471] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:53.876 nvme0n1 00:20:53.876 17:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:53.876 Running I/O for 1 seconds... 00:20:54.816 5251.00 IOPS, 20.51 MiB/s 00:20:54.816 Latency(us) 00:20:54.816 [2024-11-20T16:03:46.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.816 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:54.816 Verification LBA range: start 0x0 length 0x2000 00:20:54.816 nvme0n1 : 1.01 5315.97 20.77 0.00 0.00 23928.60 4915.20 32986.45 00:20:54.816 [2024-11-20T16:03:46.992Z] =================================================================================================================== 00:20:54.816 [2024-11-20T16:03:46.992Z] Total : 5315.97 20.77 0.00 0.00 23928.60 4915.20 32986.45 00:20:54.816 { 00:20:54.816 "results": [ 00:20:54.816 { 00:20:54.816 "job": "nvme0n1", 00:20:54.816 "core_mask": "0x2", 00:20:54.817 "workload": "verify", 00:20:54.817 "status": "finished", 00:20:54.817 "verify_range": { 00:20:54.817 "start": 0, 00:20:54.817 "length": 8192 00:20:54.817 }, 00:20:54.817 "queue_depth": 128, 00:20:54.817 "io_size": 4096, 00:20:54.817 "runtime": 1.012044, 00:20:54.817 "iops": 5315.974404274913, 00:20:54.817 "mibps": 20.765525016698877, 00:20:54.817 "io_failed": 0, 00:20:54.817 "io_timeout": 0, 00:20:54.817 "avg_latency_us": 23928.601060718713, 00:20:54.817 "min_latency_us": 4915.2, 00:20:54.817 "max_latency_us": 32986.45333333333 00:20:54.817 } 00:20:54.817 ], 00:20:54.817 "core_count": 1 00:20:54.817 } 00:20:54.817 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:54.817 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.817 17:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.078 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.078 17:03:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:55.078 "subsystems": [ 00:20:55.078 { 00:20:55.078 "subsystem": "keyring", 00:20:55.078 "config": [ 00:20:55.078 { 00:20:55.078 "method": "keyring_file_add_key", 00:20:55.078 "params": { 00:20:55.078 "name": "key0", 00:20:55.078 "path": "/tmp/tmp.VjbBbOvcST" 00:20:55.078 } 00:20:55.078 } 00:20:55.078 ] 00:20:55.078 }, 00:20:55.078 { 00:20:55.078 "subsystem": "iobuf", 00:20:55.078 "config": [ 00:20:55.078 { 00:20:55.078 "method": "iobuf_set_options", 00:20:55.078 "params": { 00:20:55.078 "small_pool_count": 8192, 00:20:55.078 "large_pool_count": 1024, 00:20:55.078 "small_bufsize": 8192, 00:20:55.078 "large_bufsize": 135168, 00:20:55.078 "enable_numa": false 00:20:55.078 } 00:20:55.078 } 00:20:55.078 ] 00:20:55.078 }, 00:20:55.078 { 00:20:55.078 "subsystem": "sock", 00:20:55.078 "config": [ 00:20:55.078 { 00:20:55.078 "method": "sock_set_default_impl", 00:20:55.078 "params": { 00:20:55.078 "impl_name": "posix" 00:20:55.078 } 00:20:55.078 }, 00:20:55.078 { 00:20:55.078 "method": "sock_impl_set_options", 00:20:55.078 "params": { 00:20:55.078 "impl_name": "ssl", 00:20:55.078 "recv_buf_size": 4096, 00:20:55.078 "send_buf_size": 4096, 00:20:55.078 "enable_recv_pipe": true, 00:20:55.078 "enable_quickack": false, 00:20:55.078 "enable_placement_id": 0, 00:20:55.078 "enable_zerocopy_send_server": true, 00:20:55.078 "enable_zerocopy_send_client": false, 00:20:55.078 "zerocopy_threshold": 0, 00:20:55.078 "tls_version": 0, 00:20:55.078 "enable_ktls": false 00:20:55.078 } 00:20:55.078 }, 00:20:55.078 { 00:20:55.078 "method": "sock_impl_set_options", 00:20:55.078 "params": { 00:20:55.078 "impl_name": "posix", 00:20:55.078 "recv_buf_size": 2097152, 00:20:55.078 "send_buf_size": 2097152, 00:20:55.078 "enable_recv_pipe": true, 00:20:55.078 "enable_quickack": false, 00:20:55.078 "enable_placement_id": 0, 00:20:55.078 "enable_zerocopy_send_server": true, 00:20:55.078 "enable_zerocopy_send_client": false, 00:20:55.078 "zerocopy_threshold": 0, 00:20:55.078 "tls_version": 0, 00:20:55.078 "enable_ktls": false 00:20:55.078 } 00:20:55.078 } 00:20:55.078 ] 00:20:55.078 }, 00:20:55.078 { 00:20:55.078 "subsystem": "vmd", 00:20:55.078 "config": [] 00:20:55.078 }, 00:20:55.078 { 00:20:55.078 "subsystem": "accel", 00:20:55.078 "config": [ 00:20:55.078 { 00:20:55.078 "method": "accel_set_options", 00:20:55.078 "params": { 00:20:55.078 "small_cache_size": 128, 00:20:55.078 "large_cache_size": 16, 00:20:55.078 "task_count": 2048, 00:20:55.078 "sequence_count": 2048, 00:20:55.078 "buf_count": 2048 00:20:55.078 } 00:20:55.078 } 00:20:55.078 ] 00:20:55.078 }, 00:20:55.078 { 00:20:55.078 "subsystem": "bdev", 00:20:55.078 "config": [ 00:20:55.078 { 00:20:55.078 "method": "bdev_set_options", 00:20:55.078 "params": { 00:20:55.078 "bdev_io_pool_size": 65535, 00:20:55.078 "bdev_io_cache_size": 256, 00:20:55.078 "bdev_auto_examine": true, 00:20:55.078 "iobuf_small_cache_size": 128, 00:20:55.078 "iobuf_large_cache_size": 16 00:20:55.078 } 00:20:55.078 }, 00:20:55.078 { 00:20:55.078 "method": "bdev_raid_set_options", 00:20:55.078 "params": { 00:20:55.078 "process_window_size_kb": 1024, 00:20:55.078 "process_max_bandwidth_mb_sec": 0 00:20:55.078 } 00:20:55.078 }, 00:20:55.078 { 00:20:55.078 "method": "bdev_iscsi_set_options", 00:20:55.078 "params": { 00:20:55.078 "timeout_sec": 30 00:20:55.078 } 00:20:55.078 }, 00:20:55.078 { 00:20:55.078 "method": "bdev_nvme_set_options", 00:20:55.078 "params": { 00:20:55.078 "action_on_timeout": "none", 00:20:55.078 
"timeout_us": 0, 00:20:55.078 "timeout_admin_us": 0, 00:20:55.078 "keep_alive_timeout_ms": 10000, 00:20:55.078 "arbitration_burst": 0, 00:20:55.078 "low_priority_weight": 0, 00:20:55.078 "medium_priority_weight": 0, 00:20:55.078 "high_priority_weight": 0, 00:20:55.078 "nvme_adminq_poll_period_us": 10000, 00:20:55.078 "nvme_ioq_poll_period_us": 0, 00:20:55.078 "io_queue_requests": 0, 00:20:55.078 "delay_cmd_submit": true, 00:20:55.078 "transport_retry_count": 4, 00:20:55.078 "bdev_retry_count": 3, 00:20:55.078 "transport_ack_timeout": 0, 00:20:55.078 "ctrlr_loss_timeout_sec": 0, 00:20:55.078 "reconnect_delay_sec": 0, 00:20:55.078 "fast_io_fail_timeout_sec": 0, 00:20:55.078 "disable_auto_failback": false, 00:20:55.078 "generate_uuids": false, 00:20:55.078 "transport_tos": 0, 00:20:55.078 "nvme_error_stat": false, 00:20:55.078 "rdma_srq_size": 0, 00:20:55.078 "io_path_stat": false, 00:20:55.078 "allow_accel_sequence": false, 00:20:55.078 "rdma_max_cq_size": 0, 00:20:55.078 "rdma_cm_event_timeout_ms": 0, 00:20:55.078 "dhchap_digests": [ 00:20:55.078 "sha256", 00:20:55.078 "sha384", 00:20:55.078 "sha512" 00:20:55.078 ], 00:20:55.078 "dhchap_dhgroups": [ 00:20:55.078 "null", 00:20:55.078 "ffdhe2048", 00:20:55.078 "ffdhe3072", 00:20:55.078 "ffdhe4096", 00:20:55.078 "ffdhe6144", 00:20:55.078 "ffdhe8192" 00:20:55.078 ] 00:20:55.078 } 00:20:55.078 }, 00:20:55.078 { 00:20:55.078 "method": "bdev_nvme_set_hotplug", 00:20:55.078 "params": { 00:20:55.078 "period_us": 100000, 00:20:55.078 "enable": false 00:20:55.078 } 00:20:55.078 }, 00:20:55.078 { 00:20:55.078 "method": "bdev_malloc_create", 00:20:55.078 "params": { 00:20:55.078 "name": "malloc0", 00:20:55.078 "num_blocks": 8192, 00:20:55.078 "block_size": 4096, 00:20:55.078 "physical_block_size": 4096, 00:20:55.078 "uuid": "14019061-c34c-4fe5-848b-72abcdf46188", 00:20:55.078 "optimal_io_boundary": 0, 00:20:55.078 "md_size": 0, 00:20:55.078 "dif_type": 0, 00:20:55.078 "dif_is_head_of_md": false, 00:20:55.078 "dif_pi_format": 0 00:20:55.078 } 00:20:55.078 }, 00:20:55.078 { 00:20:55.078 "method": "bdev_wait_for_examine" 00:20:55.078 } 00:20:55.078 ] 00:20:55.078 }, 00:20:55.078 { 00:20:55.078 "subsystem": "nbd", 00:20:55.078 "config": [] 00:20:55.078 }, 00:20:55.078 { 00:20:55.078 "subsystem": "scheduler", 00:20:55.078 "config": [ 00:20:55.078 { 00:20:55.078 "method": "framework_set_scheduler", 00:20:55.078 "params": { 00:20:55.078 "name": "static" 00:20:55.078 } 00:20:55.078 } 00:20:55.078 ] 00:20:55.078 }, 00:20:55.078 { 00:20:55.078 "subsystem": "nvmf", 00:20:55.078 "config": [ 00:20:55.078 { 00:20:55.078 "method": "nvmf_set_config", 00:20:55.078 "params": { 00:20:55.078 "discovery_filter": "match_any", 00:20:55.078 "admin_cmd_passthru": { 00:20:55.078 "identify_ctrlr": false 00:20:55.078 }, 00:20:55.078 "dhchap_digests": [ 00:20:55.078 "sha256", 00:20:55.078 "sha384", 00:20:55.078 "sha512" 00:20:55.078 ], 00:20:55.078 "dhchap_dhgroups": [ 00:20:55.078 "null", 00:20:55.078 "ffdhe2048", 00:20:55.078 "ffdhe3072", 00:20:55.078 "ffdhe4096", 00:20:55.078 "ffdhe6144", 00:20:55.078 "ffdhe8192" 00:20:55.078 ] 00:20:55.078 } 00:20:55.078 }, 00:20:55.078 { 00:20:55.078 "method": "nvmf_set_max_subsystems", 00:20:55.078 "params": { 00:20:55.078 "max_subsystems": 1024 00:20:55.078 } 00:20:55.078 }, 00:20:55.078 { 00:20:55.078 "method": "nvmf_set_crdt", 00:20:55.078 "params": { 00:20:55.078 "crdt1": 0, 00:20:55.078 "crdt2": 0, 00:20:55.078 "crdt3": 0 00:20:55.078 } 00:20:55.078 }, 00:20:55.078 { 00:20:55.078 "method": "nvmf_create_transport", 00:20:55.078 "params": 
{ 00:20:55.078 "trtype": "TCP", 00:20:55.078 "max_queue_depth": 128, 00:20:55.078 "max_io_qpairs_per_ctrlr": 127, 00:20:55.078 "in_capsule_data_size": 4096, 00:20:55.078 "max_io_size": 131072, 00:20:55.078 "io_unit_size": 131072, 00:20:55.078 "max_aq_depth": 128, 00:20:55.078 "num_shared_buffers": 511, 00:20:55.078 "buf_cache_size": 4294967295, 00:20:55.078 "dif_insert_or_strip": false, 00:20:55.078 "zcopy": false, 00:20:55.078 "c2h_success": false, 00:20:55.078 "sock_priority": 0, 00:20:55.078 "abort_timeout_sec": 1, 00:20:55.078 "ack_timeout": 0, 00:20:55.078 "data_wr_pool_size": 0 00:20:55.079 } 00:20:55.079 }, 00:20:55.079 { 00:20:55.079 "method": "nvmf_create_subsystem", 00:20:55.079 "params": { 00:20:55.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.079 "allow_any_host": false, 00:20:55.079 "serial_number": "00000000000000000000", 00:20:55.079 "model_number": "SPDK bdev Controller", 00:20:55.079 "max_namespaces": 32, 00:20:55.079 "min_cntlid": 1, 00:20:55.079 "max_cntlid": 65519, 00:20:55.079 "ana_reporting": false 00:20:55.079 } 00:20:55.079 }, 00:20:55.079 { 00:20:55.079 "method": "nvmf_subsystem_add_host", 00:20:55.079 "params": { 00:20:55.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.079 "host": "nqn.2016-06.io.spdk:host1", 00:20:55.079 "psk": "key0" 00:20:55.079 } 00:20:55.079 }, 00:20:55.079 { 00:20:55.079 "method": "nvmf_subsystem_add_ns", 00:20:55.079 "params": { 00:20:55.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.079 "namespace": { 00:20:55.079 "nsid": 1, 00:20:55.079 "bdev_name": "malloc0", 00:20:55.079 "nguid": "14019061C34C4FE5848B72ABCDF46188", 00:20:55.079 "uuid": "14019061-c34c-4fe5-848b-72abcdf46188", 00:20:55.079 "no_auto_visible": false 00:20:55.079 } 00:20:55.079 } 00:20:55.079 }, 00:20:55.079 { 00:20:55.079 "method": "nvmf_subsystem_add_listener", 00:20:55.079 "params": { 00:20:55.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.079 "listen_address": { 00:20:55.079 "trtype": "TCP", 00:20:55.079 "adrfam": "IPv4", 00:20:55.079 "traddr": "10.0.0.2", 00:20:55.079 "trsvcid": "4420" 00:20:55.079 }, 00:20:55.079 "secure_channel": false, 00:20:55.079 "sock_impl": "ssl" 00:20:55.079 } 00:20:55.079 } 00:20:55.079 ] 00:20:55.079 } 00:20:55.079 ] 00:20:55.079 }' 00:20:55.079 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:55.341 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:55.341 "subsystems": [ 00:20:55.341 { 00:20:55.341 "subsystem": "keyring", 00:20:55.341 "config": [ 00:20:55.341 { 00:20:55.341 "method": "keyring_file_add_key", 00:20:55.341 "params": { 00:20:55.341 "name": "key0", 00:20:55.341 "path": "/tmp/tmp.VjbBbOvcST" 00:20:55.341 } 00:20:55.341 } 00:20:55.341 ] 00:20:55.341 }, 00:20:55.341 { 00:20:55.341 "subsystem": "iobuf", 00:20:55.341 "config": [ 00:20:55.341 { 00:20:55.341 "method": "iobuf_set_options", 00:20:55.341 "params": { 00:20:55.341 "small_pool_count": 8192, 00:20:55.341 "large_pool_count": 1024, 00:20:55.341 "small_bufsize": 8192, 00:20:55.341 "large_bufsize": 135168, 00:20:55.341 "enable_numa": false 00:20:55.341 } 00:20:55.341 } 00:20:55.341 ] 00:20:55.341 }, 00:20:55.341 { 00:20:55.341 "subsystem": "sock", 00:20:55.341 "config": [ 00:20:55.341 { 00:20:55.341 "method": "sock_set_default_impl", 00:20:55.341 "params": { 00:20:55.341 "impl_name": "posix" 00:20:55.341 } 00:20:55.341 }, 00:20:55.341 { 00:20:55.341 "method": "sock_impl_set_options", 00:20:55.341 
"params": { 00:20:55.341 "impl_name": "ssl", 00:20:55.341 "recv_buf_size": 4096, 00:20:55.341 "send_buf_size": 4096, 00:20:55.341 "enable_recv_pipe": true, 00:20:55.341 "enable_quickack": false, 00:20:55.341 "enable_placement_id": 0, 00:20:55.341 "enable_zerocopy_send_server": true, 00:20:55.341 "enable_zerocopy_send_client": false, 00:20:55.341 "zerocopy_threshold": 0, 00:20:55.341 "tls_version": 0, 00:20:55.341 "enable_ktls": false 00:20:55.341 } 00:20:55.341 }, 00:20:55.341 { 00:20:55.341 "method": "sock_impl_set_options", 00:20:55.341 "params": { 00:20:55.341 "impl_name": "posix", 00:20:55.341 "recv_buf_size": 2097152, 00:20:55.341 "send_buf_size": 2097152, 00:20:55.341 "enable_recv_pipe": true, 00:20:55.341 "enable_quickack": false, 00:20:55.341 "enable_placement_id": 0, 00:20:55.341 "enable_zerocopy_send_server": true, 00:20:55.341 "enable_zerocopy_send_client": false, 00:20:55.341 "zerocopy_threshold": 0, 00:20:55.341 "tls_version": 0, 00:20:55.341 "enable_ktls": false 00:20:55.341 } 00:20:55.341 } 00:20:55.341 ] 00:20:55.341 }, 00:20:55.341 { 00:20:55.341 "subsystem": "vmd", 00:20:55.341 "config": [] 00:20:55.341 }, 00:20:55.341 { 00:20:55.341 "subsystem": "accel", 00:20:55.341 "config": [ 00:20:55.341 { 00:20:55.341 "method": "accel_set_options", 00:20:55.341 "params": { 00:20:55.341 "small_cache_size": 128, 00:20:55.341 "large_cache_size": 16, 00:20:55.341 "task_count": 2048, 00:20:55.341 "sequence_count": 2048, 00:20:55.341 "buf_count": 2048 00:20:55.341 } 00:20:55.341 } 00:20:55.341 ] 00:20:55.341 }, 00:20:55.341 { 00:20:55.341 "subsystem": "bdev", 00:20:55.341 "config": [ 00:20:55.341 { 00:20:55.341 "method": "bdev_set_options", 00:20:55.341 "params": { 00:20:55.341 "bdev_io_pool_size": 65535, 00:20:55.341 "bdev_io_cache_size": 256, 00:20:55.341 "bdev_auto_examine": true, 00:20:55.341 "iobuf_small_cache_size": 128, 00:20:55.341 "iobuf_large_cache_size": 16 00:20:55.341 } 00:20:55.341 }, 00:20:55.341 { 00:20:55.341 "method": "bdev_raid_set_options", 00:20:55.341 "params": { 00:20:55.341 "process_window_size_kb": 1024, 00:20:55.341 "process_max_bandwidth_mb_sec": 0 00:20:55.341 } 00:20:55.341 }, 00:20:55.341 { 00:20:55.341 "method": "bdev_iscsi_set_options", 00:20:55.341 "params": { 00:20:55.341 "timeout_sec": 30 00:20:55.341 } 00:20:55.341 }, 00:20:55.341 { 00:20:55.341 "method": "bdev_nvme_set_options", 00:20:55.341 "params": { 00:20:55.341 "action_on_timeout": "none", 00:20:55.341 "timeout_us": 0, 00:20:55.341 "timeout_admin_us": 0, 00:20:55.341 "keep_alive_timeout_ms": 10000, 00:20:55.341 "arbitration_burst": 0, 00:20:55.341 "low_priority_weight": 0, 00:20:55.341 "medium_priority_weight": 0, 00:20:55.341 "high_priority_weight": 0, 00:20:55.341 "nvme_adminq_poll_period_us": 10000, 00:20:55.341 "nvme_ioq_poll_period_us": 0, 00:20:55.341 "io_queue_requests": 512, 00:20:55.341 "delay_cmd_submit": true, 00:20:55.341 "transport_retry_count": 4, 00:20:55.341 "bdev_retry_count": 3, 00:20:55.341 "transport_ack_timeout": 0, 00:20:55.341 "ctrlr_loss_timeout_sec": 0, 00:20:55.341 "reconnect_delay_sec": 0, 00:20:55.341 "fast_io_fail_timeout_sec": 0, 00:20:55.341 "disable_auto_failback": false, 00:20:55.341 "generate_uuids": false, 00:20:55.341 "transport_tos": 0, 00:20:55.341 "nvme_error_stat": false, 00:20:55.341 "rdma_srq_size": 0, 00:20:55.341 "io_path_stat": false, 00:20:55.341 "allow_accel_sequence": false, 00:20:55.341 "rdma_max_cq_size": 0, 00:20:55.341 "rdma_cm_event_timeout_ms": 0, 00:20:55.341 "dhchap_digests": [ 00:20:55.341 "sha256", 00:20:55.341 "sha384", 00:20:55.341 
"sha512" 00:20:55.341 ], 00:20:55.341 "dhchap_dhgroups": [ 00:20:55.341 "null", 00:20:55.341 "ffdhe2048", 00:20:55.341 "ffdhe3072", 00:20:55.341 "ffdhe4096", 00:20:55.341 "ffdhe6144", 00:20:55.341 "ffdhe8192" 00:20:55.341 ] 00:20:55.341 } 00:20:55.341 }, 00:20:55.341 { 00:20:55.341 "method": "bdev_nvme_attach_controller", 00:20:55.341 "params": { 00:20:55.341 "name": "nvme0", 00:20:55.341 "trtype": "TCP", 00:20:55.341 "adrfam": "IPv4", 00:20:55.341 "traddr": "10.0.0.2", 00:20:55.341 "trsvcid": "4420", 00:20:55.341 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.341 "prchk_reftag": false, 00:20:55.341 "prchk_guard": false, 00:20:55.341 "ctrlr_loss_timeout_sec": 0, 00:20:55.341 "reconnect_delay_sec": 0, 00:20:55.341 "fast_io_fail_timeout_sec": 0, 00:20:55.341 "psk": "key0", 00:20:55.341 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:55.341 "hdgst": false, 00:20:55.341 "ddgst": false, 00:20:55.341 "multipath": "multipath" 00:20:55.341 } 00:20:55.341 }, 00:20:55.341 { 00:20:55.341 "method": "bdev_nvme_set_hotplug", 00:20:55.341 "params": { 00:20:55.341 "period_us": 100000, 00:20:55.341 "enable": false 00:20:55.342 } 00:20:55.342 }, 00:20:55.342 { 00:20:55.342 "method": "bdev_enable_histogram", 00:20:55.342 "params": { 00:20:55.342 "name": "nvme0n1", 00:20:55.342 "enable": true 00:20:55.342 } 00:20:55.342 }, 00:20:55.342 { 00:20:55.342 "method": "bdev_wait_for_examine" 00:20:55.342 } 00:20:55.342 ] 00:20:55.342 }, 00:20:55.342 { 00:20:55.342 "subsystem": "nbd", 00:20:55.342 "config": [] 00:20:55.342 } 00:20:55.342 ] 00:20:55.342 }' 00:20:55.342 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1988493 00:20:55.342 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1988493 ']' 00:20:55.342 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1988493 00:20:55.342 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:55.342 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.342 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1988493 00:20:55.342 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:55.342 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:55.342 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1988493' 00:20:55.342 killing process with pid 1988493 00:20:55.342 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1988493 00:20:55.342 Received shutdown signal, test time was about 1.000000 seconds 00:20:55.342 00:20:55.342 Latency(us) 00:20:55.342 [2024-11-20T16:03:47.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.342 [2024-11-20T16:03:47.518Z] =================================================================================================================== 00:20:55.342 [2024-11-20T16:03:47.518Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:55.342 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1988493 00:20:55.342 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1988424 00:20:55.342 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1988424 
']' 00:20:55.342 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1988424 00:20:55.342 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:55.342 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.342 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1988424 00:20:55.603 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:55.603 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:55.603 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1988424' 00:20:55.603 killing process with pid 1988424 00:20:55.603 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1988424 00:20:55.603 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1988424 00:20:55.603 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:55.603 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:55.603 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.603 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:55.603 "subsystems": [ 00:20:55.603 { 00:20:55.603 "subsystem": "keyring", 00:20:55.603 "config": [ 00:20:55.603 { 00:20:55.603 "method": "keyring_file_add_key", 00:20:55.603 "params": { 00:20:55.603 "name": "key0", 00:20:55.603 "path": "/tmp/tmp.VjbBbOvcST" 00:20:55.603 } 00:20:55.603 } 00:20:55.603 ] 00:20:55.603 }, 00:20:55.603 { 00:20:55.603 "subsystem": "iobuf", 00:20:55.603 "config": [ 00:20:55.603 { 00:20:55.603 "method": "iobuf_set_options", 00:20:55.603 "params": { 00:20:55.603 "small_pool_count": 8192, 00:20:55.603 "large_pool_count": 1024, 00:20:55.603 "small_bufsize": 8192, 00:20:55.603 "large_bufsize": 135168, 00:20:55.603 "enable_numa": false 00:20:55.603 } 00:20:55.603 } 00:20:55.603 ] 00:20:55.603 }, 00:20:55.603 { 00:20:55.603 "subsystem": "sock", 00:20:55.604 "config": [ 00:20:55.604 { 00:20:55.604 "method": "sock_set_default_impl", 00:20:55.604 "params": { 00:20:55.604 "impl_name": "posix" 00:20:55.604 } 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "method": "sock_impl_set_options", 00:20:55.604 "params": { 00:20:55.604 "impl_name": "ssl", 00:20:55.604 "recv_buf_size": 4096, 00:20:55.604 "send_buf_size": 4096, 00:20:55.604 "enable_recv_pipe": true, 00:20:55.604 "enable_quickack": false, 00:20:55.604 "enable_placement_id": 0, 00:20:55.604 "enable_zerocopy_send_server": true, 00:20:55.604 "enable_zerocopy_send_client": false, 00:20:55.604 "zerocopy_threshold": 0, 00:20:55.604 "tls_version": 0, 00:20:55.604 "enable_ktls": false 00:20:55.604 } 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "method": "sock_impl_set_options", 00:20:55.604 "params": { 00:20:55.604 "impl_name": "posix", 00:20:55.604 "recv_buf_size": 2097152, 00:20:55.604 "send_buf_size": 2097152, 00:20:55.604 "enable_recv_pipe": true, 00:20:55.604 "enable_quickack": false, 00:20:55.604 "enable_placement_id": 0, 00:20:55.604 "enable_zerocopy_send_server": true, 00:20:55.604 "enable_zerocopy_send_client": false, 00:20:55.604 "zerocopy_threshold": 0, 00:20:55.604 "tls_version": 0, 00:20:55.604 "enable_ktls": 
false 00:20:55.604 } 00:20:55.604 } 00:20:55.604 ] 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "subsystem": "vmd", 00:20:55.604 "config": [] 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "subsystem": "accel", 00:20:55.604 "config": [ 00:20:55.604 { 00:20:55.604 "method": "accel_set_options", 00:20:55.604 "params": { 00:20:55.604 "small_cache_size": 128, 00:20:55.604 "large_cache_size": 16, 00:20:55.604 "task_count": 2048, 00:20:55.604 "sequence_count": 2048, 00:20:55.604 "buf_count": 2048 00:20:55.604 } 00:20:55.604 } 00:20:55.604 ] 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "subsystem": "bdev", 00:20:55.604 "config": [ 00:20:55.604 { 00:20:55.604 "method": "bdev_set_options", 00:20:55.604 "params": { 00:20:55.604 "bdev_io_pool_size": 65535, 00:20:55.604 "bdev_io_cache_size": 256, 00:20:55.604 "bdev_auto_examine": true, 00:20:55.604 "iobuf_small_cache_size": 128, 00:20:55.604 "iobuf_large_cache_size": 16 00:20:55.604 } 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "method": "bdev_raid_set_options", 00:20:55.604 "params": { 00:20:55.604 "process_window_size_kb": 1024, 00:20:55.604 "process_max_bandwidth_mb_sec": 0 00:20:55.604 } 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "method": "bdev_iscsi_set_options", 00:20:55.604 "params": { 00:20:55.604 "timeout_sec": 30 00:20:55.604 } 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "method": "bdev_nvme_set_options", 00:20:55.604 "params": { 00:20:55.604 "action_on_timeout": "none", 00:20:55.604 "timeout_us": 0, 00:20:55.604 "timeout_admin_us": 0, 00:20:55.604 "keep_alive_timeout_ms": 10000, 00:20:55.604 "arbitration_burst": 0, 00:20:55.604 "low_priority_weight": 0, 00:20:55.604 "medium_priority_weight": 0, 00:20:55.604 "high_priority_weight": 0, 00:20:55.604 "nvme_adminq_poll_period_us": 10000, 00:20:55.604 "nvme_ioq_poll_period_us": 0, 00:20:55.604 "io_queue_requests": 0, 00:20:55.604 "delay_cmd_submit": true, 00:20:55.604 "transport_retry_count": 4, 00:20:55.604 "bdev_retry_count": 3, 00:20:55.604 "transport_ack_timeout": 0, 00:20:55.604 "ctrlr_loss_timeout_sec": 0, 00:20:55.604 "reconnect_delay_sec": 0, 00:20:55.604 "fast_io_fail_timeout_sec": 0, 00:20:55.604 "disable_auto_failback": false, 00:20:55.604 "generate_uuids": false, 00:20:55.604 "transport_tos": 0, 00:20:55.604 "nvme_error_stat": false, 00:20:55.604 "rdma_srq_size": 0, 00:20:55.604 "io_path_stat": false, 00:20:55.604 "allow_accel_sequence": false, 00:20:55.604 "rdma_max_cq_size": 0, 00:20:55.604 "rdma_cm_event_timeout_ms": 0, 00:20:55.604 "dhchap_digests": [ 00:20:55.604 "sha256", 00:20:55.604 "sha384", 00:20:55.604 "sha512" 00:20:55.604 ], 00:20:55.604 "dhchap_dhgroups": [ 00:20:55.604 "null", 00:20:55.604 "ffdhe2048", 00:20:55.604 "ffdhe3072", 00:20:55.604 "ffdhe4096", 00:20:55.604 "ffdhe6144", 00:20:55.604 "ffdhe8192" 00:20:55.604 ] 00:20:55.604 } 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "method": "bdev_nvme_set_hotplug", 00:20:55.604 "params": { 00:20:55.604 "period_us": 100000, 00:20:55.604 "enable": false 00:20:55.604 } 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "method": "bdev_malloc_create", 00:20:55.604 "params": { 00:20:55.604 "name": "malloc0", 00:20:55.604 "num_blocks": 8192, 00:20:55.604 "block_size": 4096, 00:20:55.604 "physical_block_size": 4096, 00:20:55.604 "uuid": "14019061-c34c-4fe5-848b-72abcdf46188", 00:20:55.604 "optimal_io_boundary": 0, 00:20:55.604 "md_size": 0, 00:20:55.604 "dif_type": 0, 00:20:55.604 "dif_is_head_of_md": false, 00:20:55.604 "dif_pi_format": 0 00:20:55.604 } 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "method": "bdev_wait_for_examine" 
00:20:55.604 } 00:20:55.604 ] 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "subsystem": "nbd", 00:20:55.604 "config": [] 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "subsystem": "scheduler", 00:20:55.604 "config": [ 00:20:55.604 { 00:20:55.604 "method": "framework_set_scheduler", 00:20:55.604 "params": { 00:20:55.604 "name": "static" 00:20:55.604 } 00:20:55.604 } 00:20:55.604 ] 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "subsystem": "nvmf", 00:20:55.604 "config": [ 00:20:55.604 { 00:20:55.604 "method": "nvmf_set_config", 00:20:55.604 "params": { 00:20:55.604 "discovery_filter": "match_any", 00:20:55.604 "admin_cmd_passthru": { 00:20:55.604 "identify_ctrlr": false 00:20:55.604 }, 00:20:55.604 "dhchap_digests": [ 00:20:55.604 "sha256", 00:20:55.604 "sha384", 00:20:55.604 "sha512" 00:20:55.604 ], 00:20:55.604 "dhchap_dhgroups": [ 00:20:55.604 "null", 00:20:55.604 "ffdhe2048", 00:20:55.604 "ffdhe3072", 00:20:55.604 "ffdhe4096", 00:20:55.604 "ffdhe6144", 00:20:55.604 "ffdhe8192" 00:20:55.604 ] 00:20:55.604 } 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "method": "nvmf_set_max_subsystems", 00:20:55.604 "params": { 00:20:55.604 "max_subsystems": 1024 00:20:55.604 } 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "method": "nvmf_set_crdt", 00:20:55.604 "params": { 00:20:55.604 "crdt1": 0, 00:20:55.604 "crdt2": 0, 00:20:55.604 "crdt3": 0 00:20:55.604 } 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "method": "nvmf_create_transport", 00:20:55.604 "params": { 00:20:55.604 "trtype": "TCP", 00:20:55.604 "max_queue_depth": 128, 00:20:55.604 "max_io_qpairs_per_ctrlr": 127, 00:20:55.604 "in_capsule_data_size": 4096, 00:20:55.604 "max_io_size": 131072, 00:20:55.604 "io_unit_size": 131072, 00:20:55.604 "max_aq_depth": 128, 00:20:55.604 "num_shared_buffers": 511, 00:20:55.604 "buf_cache_size": 4294967295, 00:20:55.604 "dif_insert_or_strip": false, 00:20:55.604 "zcopy": false, 00:20:55.604 "c2h_success": false, 00:20:55.604 "sock_priority": 0, 00:20:55.604 "abort_timeout_sec": 1, 00:20:55.604 "ack_timeout": 0, 00:20:55.604 "data_wr_pool_size": 0 00:20:55.604 } 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "method": "nvmf_create_subsystem", 00:20:55.604 "params": { 00:20:55.604 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.604 "allow_any_host": false, 00:20:55.604 "serial_number": "00000000000000000000", 00:20:55.604 "model_number": "SPDK bdev Controller", 00:20:55.604 "max_namespaces": 32, 00:20:55.604 "min_cntlid": 1, 00:20:55.604 "max_cntlid": 65519, 00:20:55.604 "ana_reporting": false 00:20:55.604 } 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "method": "nvmf_subsystem_add_host", 00:20:55.604 "params": { 00:20:55.604 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.604 "host": "nqn.2016-06.io.spdk:host1", 00:20:55.604 "psk": "key0" 00:20:55.604 } 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "method": "nvmf_subsystem_add_ns", 00:20:55.604 "params": { 00:20:55.604 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.604 "namespace": { 00:20:55.604 "nsid": 1, 00:20:55.604 "bdev_name": "malloc0", 00:20:55.604 "nguid": "14019061C34C4FE5848B72ABCDF46188", 00:20:55.604 "uuid": "14019061-c34c-4fe5-848b-72abcdf46188", 00:20:55.604 "no_auto_visible": false 00:20:55.604 } 00:20:55.604 } 00:20:55.604 }, 00:20:55.604 { 00:20:55.604 "method": "nvmf_subsystem_add_listener", 00:20:55.604 "params": { 00:20:55.604 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.604 "listen_address": { 00:20:55.604 "trtype": "TCP", 00:20:55.604 "adrfam": "IPv4", 00:20:55.604 "traddr": "10.0.0.2", 00:20:55.604 "trsvcid": "4420" 00:20:55.604 }, 00:20:55.604 
"secure_channel": false, 00:20:55.604 "sock_impl": "ssl" 00:20:55.604 } 00:20:55.604 } 00:20:55.604 ] 00:20:55.604 } 00:20:55.604 ] 00:20:55.605 }' 00:20:55.605 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.605 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1989179 00:20:55.605 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1989179 00:20:55.605 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:55.605 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1989179 ']' 00:20:55.605 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.605 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.605 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.605 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.605 17:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.605 [2024-11-20 17:03:47.742950] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:20:55.605 [2024-11-20 17:03:47.743008] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.864 [2024-11-20 17:03:47.831218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.865 [2024-11-20 17:03:47.860403] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.865 [2024-11-20 17:03:47.860429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.865 [2024-11-20 17:03:47.860434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.865 [2024-11-20 17:03:47.860439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.865 [2024-11-20 17:03:47.860443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:55.865 [2024-11-20 17:03:47.860930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.125 [2024-11-20 17:03:48.054733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.125 [2024-11-20 17:03:48.086766] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:56.125 [2024-11-20 17:03:48.086970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.386 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.386 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:56.386 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:56.386 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:56.386 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.647 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.647 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1989365 00:20:56.647 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1989365 /var/tmp/bdevperf.sock 00:20:56.647 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1989365 ']' 00:20:56.647 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.647 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.647 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
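For reference, the bdevperf invocation that follows runs in suspended mode and is driven entirely over its private RPC socket. The flags decode as sketched here (paths assume an SPDK checkout; build_bdevperf_config stands in for the JSON the test echoes on fd 63):

    # -z          start idle; do no I/O until told to over RPC
    # -r SOCK     private RPC socket for this bdevperf instance
    # -q 128      queue depth
    # -o 4k       I/O size
    # -w verify   write, read back, compare
    # -t 1        run time in seconds
    # -m 2        core mask 0x2 (reactor on core 1, matching the log below)
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(build_bdevperf_config) &
    # once configured, kick off the workload:
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests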
00:20:56.647 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:56.647 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.647 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.647 17:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:56.647 "subsystems": [ 00:20:56.647 { 00:20:56.647 "subsystem": "keyring", 00:20:56.647 "config": [ 00:20:56.647 { 00:20:56.647 "method": "keyring_file_add_key", 00:20:56.647 "params": { 00:20:56.647 "name": "key0", 00:20:56.647 "path": "/tmp/tmp.VjbBbOvcST" 00:20:56.647 } 00:20:56.647 } 00:20:56.647 ] 00:20:56.648 }, 00:20:56.648 { 00:20:56.648 "subsystem": "iobuf", 00:20:56.648 "config": [ 00:20:56.648 { 00:20:56.648 "method": "iobuf_set_options", 00:20:56.648 "params": { 00:20:56.648 "small_pool_count": 8192, 00:20:56.648 "large_pool_count": 1024, 00:20:56.648 "small_bufsize": 8192, 00:20:56.648 "large_bufsize": 135168, 00:20:56.648 "enable_numa": false 00:20:56.648 } 00:20:56.648 } 00:20:56.648 ] 00:20:56.648 }, 00:20:56.648 { 00:20:56.648 "subsystem": "sock", 00:20:56.648 "config": [ 00:20:56.648 { 00:20:56.648 "method": "sock_set_default_impl", 00:20:56.648 "params": { 00:20:56.648 "impl_name": "posix" 00:20:56.648 } 00:20:56.648 }, 00:20:56.648 { 00:20:56.648 "method": "sock_impl_set_options", 00:20:56.648 "params": { 00:20:56.648 "impl_name": "ssl", 00:20:56.648 "recv_buf_size": 4096, 00:20:56.648 "send_buf_size": 4096, 00:20:56.648 "enable_recv_pipe": true, 00:20:56.648 "enable_quickack": false, 00:20:56.648 "enable_placement_id": 0, 00:20:56.648 "enable_zerocopy_send_server": true, 00:20:56.648 "enable_zerocopy_send_client": false, 00:20:56.648 "zerocopy_threshold": 0, 00:20:56.648 "tls_version": 0, 00:20:56.648 "enable_ktls": false 00:20:56.648 } 00:20:56.648 }, 00:20:56.648 { 00:20:56.648 "method": "sock_impl_set_options", 00:20:56.648 "params": { 00:20:56.648 "impl_name": "posix", 00:20:56.648 "recv_buf_size": 2097152, 00:20:56.648 "send_buf_size": 2097152, 00:20:56.648 "enable_recv_pipe": true, 00:20:56.648 "enable_quickack": false, 00:20:56.648 "enable_placement_id": 0, 00:20:56.648 "enable_zerocopy_send_server": true, 00:20:56.648 "enable_zerocopy_send_client": false, 00:20:56.648 "zerocopy_threshold": 0, 00:20:56.648 "tls_version": 0, 00:20:56.648 "enable_ktls": false 00:20:56.648 } 00:20:56.648 } 00:20:56.648 ] 00:20:56.648 }, 00:20:56.648 { 00:20:56.648 "subsystem": "vmd", 00:20:56.648 "config": [] 00:20:56.648 }, 00:20:56.648 { 00:20:56.648 "subsystem": "accel", 00:20:56.648 "config": [ 00:20:56.648 { 00:20:56.648 "method": "accel_set_options", 00:20:56.648 "params": { 00:20:56.648 "small_cache_size": 128, 00:20:56.648 "large_cache_size": 16, 00:20:56.648 "task_count": 2048, 00:20:56.648 "sequence_count": 2048, 00:20:56.648 "buf_count": 2048 00:20:56.648 } 00:20:56.648 } 00:20:56.648 ] 00:20:56.648 }, 00:20:56.648 { 00:20:56.648 "subsystem": "bdev", 00:20:56.648 "config": [ 00:20:56.648 { 00:20:56.648 "method": "bdev_set_options", 00:20:56.648 "params": { 00:20:56.648 "bdev_io_pool_size": 65535, 00:20:56.648 "bdev_io_cache_size": 256, 00:20:56.648 "bdev_auto_examine": true, 00:20:56.648 "iobuf_small_cache_size": 128, 00:20:56.648 "iobuf_large_cache_size": 16 00:20:56.648 } 00:20:56.648 }, 00:20:56.648 { 00:20:56.648 "method": 
"bdev_raid_set_options", 00:20:56.648 "params": { 00:20:56.648 "process_window_size_kb": 1024, 00:20:56.648 "process_max_bandwidth_mb_sec": 0 00:20:56.648 } 00:20:56.648 }, 00:20:56.648 { 00:20:56.648 "method": "bdev_iscsi_set_options", 00:20:56.648 "params": { 00:20:56.648 "timeout_sec": 30 00:20:56.648 } 00:20:56.648 }, 00:20:56.648 { 00:20:56.648 "method": "bdev_nvme_set_options", 00:20:56.648 "params": { 00:20:56.648 "action_on_timeout": "none", 00:20:56.648 "timeout_us": 0, 00:20:56.648 "timeout_admin_us": 0, 00:20:56.648 "keep_alive_timeout_ms": 10000, 00:20:56.648 "arbitration_burst": 0, 00:20:56.648 "low_priority_weight": 0, 00:20:56.648 "medium_priority_weight": 0, 00:20:56.648 "high_priority_weight": 0, 00:20:56.648 "nvme_adminq_poll_period_us": 10000, 00:20:56.648 "nvme_ioq_poll_period_us": 0, 00:20:56.648 "io_queue_requests": 512, 00:20:56.648 "delay_cmd_submit": true, 00:20:56.648 "transport_retry_count": 4, 00:20:56.648 "bdev_retry_count": 3, 00:20:56.648 "transport_ack_timeout": 0, 00:20:56.648 "ctrlr_loss_timeout_sec": 0, 00:20:56.648 "reconnect_delay_sec": 0, 00:20:56.648 "fast_io_fail_timeout_sec": 0, 00:20:56.648 "disable_auto_failback": false, 00:20:56.648 "generate_uuids": false, 00:20:56.648 "transport_tos": 0, 00:20:56.648 "nvme_error_stat": false, 00:20:56.648 "rdma_srq_size": 0, 00:20:56.648 "io_path_stat": false, 00:20:56.648 "allow_accel_sequence": false, 00:20:56.648 "rdma_max_cq_size": 0, 00:20:56.648 "rdma_cm_event_timeout_ms": 0, 00:20:56.648 "dhchap_digests": [ 00:20:56.648 "sha256", 00:20:56.648 "sha384", 00:20:56.648 "sha512" 00:20:56.648 ], 00:20:56.648 "dhchap_dhgroups": [ 00:20:56.648 "null", 00:20:56.648 "ffdhe2048", 00:20:56.648 "ffdhe3072", 00:20:56.648 "ffdhe4096", 00:20:56.648 "ffdhe6144", 00:20:56.648 "ffdhe8192" 00:20:56.648 ] 00:20:56.648 } 00:20:56.648 }, 00:20:56.648 { 00:20:56.648 "method": "bdev_nvme_attach_controller", 00:20:56.648 "params": { 00:20:56.648 "name": "nvme0", 00:20:56.648 "trtype": "TCP", 00:20:56.648 "adrfam": "IPv4", 00:20:56.648 "traddr": "10.0.0.2", 00:20:56.648 "trsvcid": "4420", 00:20:56.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.648 "prchk_reftag": false, 00:20:56.648 "prchk_guard": false, 00:20:56.649 "ctrlr_loss_timeout_sec": 0, 00:20:56.649 "reconnect_delay_sec": 0, 00:20:56.649 "fast_io_fail_timeout_sec": 0, 00:20:56.649 "psk": "key0", 00:20:56.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:56.649 "hdgst": false, 00:20:56.649 "ddgst": false, 00:20:56.649 "multipath": "multipath" 00:20:56.649 } 00:20:56.649 }, 00:20:56.649 { 00:20:56.649 "method": "bdev_nvme_set_hotplug", 00:20:56.649 "params": { 00:20:56.649 "period_us": 100000, 00:20:56.649 "enable": false 00:20:56.649 } 00:20:56.649 }, 00:20:56.649 { 00:20:56.649 "method": "bdev_enable_histogram", 00:20:56.649 "params": { 00:20:56.649 "name": "nvme0n1", 00:20:56.649 "enable": true 00:20:56.649 } 00:20:56.649 }, 00:20:56.649 { 00:20:56.649 "method": "bdev_wait_for_examine" 00:20:56.649 } 00:20:56.649 ] 00:20:56.649 }, 00:20:56.649 { 00:20:56.649 "subsystem": "nbd", 00:20:56.649 "config": [] 00:20:56.649 } 00:20:56.649 ] 00:20:56.649 }' 00:20:56.649 [2024-11-20 17:03:48.614414] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:20:56.649 [2024-11-20 17:03:48.614468] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1989365 ] 00:20:56.649 [2024-11-20 17:03:48.699921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.649 [2024-11-20 17:03:48.729618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.909 [2024-11-20 17:03:48.865536] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:57.477 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.477 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:57.477 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:57.477 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:57.477 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.477 17:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:57.737 Running I/O for 1 seconds... 00:20:58.732 6000.00 IOPS, 23.44 MiB/s 00:20:58.732 Latency(us) 00:20:58.732 [2024-11-20T16:03:50.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.732 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:58.732 Verification LBA range: start 0x0 length 0x2000 00:20:58.733 nvme0n1 : 1.01 6050.35 23.63 0.00 0.00 21021.16 4805.97 26760.53 00:20:58.733 [2024-11-20T16:03:50.909Z] =================================================================================================================== 00:20:58.733 [2024-11-20T16:03:50.909Z] Total : 6050.35 23.63 0.00 0.00 21021.16 4805.97 26760.53 00:20:58.733 { 00:20:58.733 "results": [ 00:20:58.733 { 00:20:58.733 "job": "nvme0n1", 00:20:58.733 "core_mask": "0x2", 00:20:58.733 "workload": "verify", 00:20:58.733 "status": "finished", 00:20:58.733 "verify_range": { 00:20:58.733 "start": 0, 00:20:58.733 "length": 8192 00:20:58.733 }, 00:20:58.733 "queue_depth": 128, 00:20:58.733 "io_size": 4096, 00:20:58.733 "runtime": 1.013, 00:20:58.733 "iops": 6050.345508390918, 00:20:58.733 "mibps": 23.634162142152025, 00:20:58.733 "io_failed": 0, 00:20:58.733 "io_timeout": 0, 00:20:58.733 "avg_latency_us": 21021.163421982925, 00:20:58.733 "min_latency_us": 4805.973333333333, 00:20:58.733 "max_latency_us": 26760.533333333333 00:20:58.733 } 00:20:58.733 ], 00:20:58.733 "core_count": 1 00:20:58.733 } 00:20:58.733 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:58.733 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:58.733 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:58.733 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:58.733 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:58.733 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:20:58.733 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:58.733 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:58.733 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:58.733 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:58.733 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:58.733 nvmf_trace.0 00:20:58.733 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:58.733 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1989365 00:20:58.733 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1989365 ']' 00:20:58.733 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1989365 00:20:58.733 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:58.733 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.733 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1989365 00:20:58.993 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:58.993 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:58.993 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1989365' 00:20:58.993 killing process with pid 1989365 00:20:58.993 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1989365 00:20:58.993 Received shutdown signal, test time was about 1.000000 seconds 00:20:58.993 00:20:58.993 Latency(us) 00:20:58.993 [2024-11-20T16:03:51.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.993 [2024-11-20T16:03:51.169Z] =================================================================================================================== 00:20:58.993 [2024-11-20T16:03:51.169Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:58.993 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1989365 00:20:58.993 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:58.993 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:58.993 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:58.993 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:58.993 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:58.993 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:58.993 17:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:58.993 rmmod nvme_tcp 00:20:58.993 rmmod nvme_fabrics 00:20:58.994 rmmod nvme_keyring 00:20:58.994 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:58.994 17:03:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:58.994 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:58.994 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1989179 ']' 00:20:58.994 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1989179 00:20:58.994 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1989179 ']' 00:20:58.994 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1989179 00:20:58.994 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:58.994 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.994 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1989179 00:20:58.994 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:58.994 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:58.994 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1989179' 00:20:58.994 killing process with pid 1989179 00:20:58.994 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1989179 00:20:58.994 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1989179 00:20:59.254 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:59.254 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:59.254 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:59.254 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:59.254 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:59.254 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:59.254 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:59.254 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:59.254 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:59.254 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.254 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:59.254 17:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.164 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:01.164 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.yOKupUGIDA /tmp/tmp.lhPVUQi0tq /tmp/tmp.VjbBbOvcST 00:21:01.164 00:21:01.164 real 1m27.497s 00:21:01.164 user 2m18.282s 00:21:01.164 sys 0m26.801s 00:21:01.164 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:01.164 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.164 ************************************ 00:21:01.164 END TEST nvmf_tls 
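The teardown running here follows the trap installed at the start of the test ('process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' on SIGINT/SIGTERM/EXIT), so it fires on failures as well as on clean exits. Condensed into a standalone sketch, using the pids, namespace and key paths of this particular run:

    cleanup() {
      kill -9 1989365 2>/dev/null || true     # bdevperf
      kill -9 1989179 2>/dev/null || true     # nvmf_tgt
      modprobe -v -r nvme-tcp nvme-fabrics || true
      # drop only the rules the test added, tagged with the SPDK_NVMF comment
      iptables-save | grep -v SPDK_NVMF | iptables-restore
      ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
      ip -4 addr flush cvl_0_1 || true
      rm -f /tmp/tmp.yOKupUGIDA /tmp/tmp.lhPVUQi0tq /tmp/tmp.VjbBbOvcST  # PSK files
    }
    trap cleanup SIGINT SIGTERM EXIT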
00:21:01.164 ************************************ 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:01.425 ************************************ 00:21:01.425 START TEST nvmf_fips 00:21:01.425 ************************************ 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:01.425 * Looking for test storage... 00:21:01.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:01.425 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:01.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.687 --rc genhtml_branch_coverage=1 00:21:01.687 --rc genhtml_function_coverage=1 00:21:01.687 --rc genhtml_legend=1 00:21:01.687 --rc geninfo_all_blocks=1 00:21:01.687 --rc geninfo_unexecuted_blocks=1 00:21:01.687 00:21:01.687 ' 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:01.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.687 --rc genhtml_branch_coverage=1 00:21:01.687 --rc genhtml_function_coverage=1 00:21:01.687 --rc genhtml_legend=1 00:21:01.687 --rc geninfo_all_blocks=1 00:21:01.687 --rc geninfo_unexecuted_blocks=1 00:21:01.687 00:21:01.687 ' 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:01.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.687 --rc genhtml_branch_coverage=1 00:21:01.687 --rc genhtml_function_coverage=1 00:21:01.687 --rc genhtml_legend=1 00:21:01.687 --rc geninfo_all_blocks=1 00:21:01.687 --rc geninfo_unexecuted_blocks=1 00:21:01.687 00:21:01.687 ' 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:01.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.687 --rc genhtml_branch_coverage=1 00:21:01.687 --rc genhtml_function_coverage=1 00:21:01.687 --rc genhtml_legend=1 00:21:01.687 --rc geninfo_all_blocks=1 00:21:01.687 --rc geninfo_unexecuted_blocks=1 00:21:01.687 00:21:01.687 ' 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:01.687 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:01.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:01.688 17:03:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:01.688 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:01.689 Error setting digest 00:21:01.689 40C210B0327F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:01.689 40C210B0327F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:01.689 
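What just happened above is the core FIPS gate of fips.sh: it points OPENSSL_CONF at a generated spdk_fips.conf that activates the FIPS provider, checks that both a base and a fips provider are listed, then proves enforcement by expecting `openssl md5` to fail, since MD5 is not an approved digest. That is exactly the "Error setting digest / unsupported" pair above, and the test treats the resulting es=1 as success. The same check in isolation might look like:

    export OPENSSL_CONF=spdk_fips.conf    # the provider config the test generated
    openssl list -providers | grep name   # expect a base and a fips provider
    if echo test | openssl md5 >/dev/null 2>&1; then
      echo 'FIPS mode NOT enforced: md5 succeeded' >&2
      exit 1
    fi
    echo test | openssl sha256 >/dev/null  # an approved digest still works
    echo 'FIPS mode enforced'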
17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:01.689 17:03:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:09.826 17:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:09.826 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:09.826 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:09.826 17:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:09.826 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:09.826 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:09.826 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.827 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:09.827 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:09.827 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:09.827 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:09.827 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:09.827 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:09.827 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.827 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.827 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:09.827 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:09.827 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:09.827 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:09.827 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:09.827 17:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:09.827 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:09.827 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.827 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:09.827 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:09.827 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:09.827 17:04:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:09.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:09.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.705 ms 00:21:09.827 00:21:09.827 --- 10.0.0.2 ping statistics --- 00:21:09.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.827 rtt min/avg/max/mdev = 0.705/0.705/0.705/0.000 ms 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:09.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:09.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:21:09.827 00:21:09.827 --- 10.0.0.1 ping statistics --- 00:21:09.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.827 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1994164 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1994164 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1994164 ']' 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.827 17:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:09.827 [2024-11-20 17:04:01.393818] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
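The nvmf_tcp_init sequence above is the core of the test network bring-up: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, an iptables rule opens the NVMe/TCP port, and both directions are ping-verified before common.sh returns 0. A condensed sketch of that sequence, using the interface names and 10.0.0.x addresses from this run; this is only a sketch of what nvmf/common.sh does here, not a drop-in replacement for it:

ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port out of the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                  # root ns -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and the reverse path

Only after both pings succeed does the harness start nvmf_tgt inside the namespace, which is what the nvmfappstart call above goes on to do.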
00:21:09.827 [2024-11-20 17:04:01.393891] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.827 [2024-11-20 17:04:01.493301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.827 [2024-11-20 17:04:01.544503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.827 [2024-11-20 17:04:01.544550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.827 [2024-11-20 17:04:01.544559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.827 [2024-11-20 17:04:01.544566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.827 [2024-11-20 17:04:01.544573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:09.827 [2024-11-20 17:04:01.545215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.087 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.087 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:10.087 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:10.087 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:10.087 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:10.087 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.087 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:10.087 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:10.087 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:10.087 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.xed 00:21:10.087 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:10.348 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.xed 00:21:10.348 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.xed 00:21:10.348 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.xed 00:21:10.349 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:10.349 [2024-11-20 17:04:02.430078] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.349 [2024-11-20 17:04:02.446079] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:10.349 [2024-11-20 17:04:02.446422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.349 malloc0 00:21:10.349 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:10.349 17:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1994276 00:21:10.349 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1994276 /var/tmp/bdevperf.sock 00:21:10.349 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:10.349 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1994276 ']' 00:21:10.349 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:10.349 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.609 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:10.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:10.609 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.609 17:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:10.609 [2024-11-20 17:04:02.591914] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:21:10.609 [2024-11-20 17:04:02.591983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1994276 ] 00:21:10.609 [2024-11-20 17:04:02.686184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.609 [2024-11-20 17:04:02.737193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.551 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.551 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:11.551 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.xed 00:21:11.551 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:11.811 [2024-11-20 17:04:03.760731] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:11.811 TLSTESTn1 00:21:11.811 17:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:11.811 Running I/O for 10 seconds... 
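The TLS plumbing in fips.sh@137-152 above condenses to four steps: write the PSK interchange key to an owner-only temp file, register that file with the running bdevperf instance as key0, attach a TLS-protected controller through it, and drive the verify workload. A sketch using the values from this run; rpc.py here abbreviates the full scripts/rpc.py path shown above, and the trailing colon is part of the PSK interchange format:

# PSK interchange key as echoed by fips.sh@137 above
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)                  # resolved to /tmp/spdk-psk.xed in this run
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"                              # restrict to owner, as fips.sh@140 does

# register the key with the running bdevperf, then attach with TLS enabled
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

The TLSTESTn1 bdev that appears next is the namespace behind that controller; bdevperf.py perform_tests then runs the -q 128 -o 4096 -w verify workload against it for the ten-second window whose samples follow.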
00:21:14.134 3425.00 IOPS, 13.38 MiB/s [2024-11-20T16:04:07.252Z] 4098.00 IOPS, 16.01 MiB/s [2024-11-20T16:04:08.301Z] 4343.33 IOPS, 16.97 MiB/s [2024-11-20T16:04:09.242Z] 4753.00 IOPS, 18.57 MiB/s [2024-11-20T16:04:10.185Z] 4981.60 IOPS, 19.46 MiB/s [2024-11-20T16:04:11.125Z] 5010.33 IOPS, 19.57 MiB/s [2024-11-20T16:04:12.067Z] 5059.57 IOPS, 19.76 MiB/s [2024-11-20T16:04:13.011Z] 5171.88 IOPS, 20.20 MiB/s [2024-11-20T16:04:14.398Z] 5226.67 IOPS, 20.42 MiB/s [2024-11-20T16:04:14.398Z] 5205.50 IOPS, 20.33 MiB/s
00:21:22.222 Latency(us)
00:21:22.222 [2024-11-20T16:04:14.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:22.222 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:22.222 Verification LBA range: start 0x0 length 0x2000
00:21:22.222 TLSTESTn1 : 10.02 5206.25 20.34 0.00 0.00 24542.79 5952.85 41943.04
00:21:22.222 [2024-11-20T16:04:14.398Z] ===================================================================================================================
00:21:22.222 [2024-11-20T16:04:14.398Z] Total : 5206.25 20.34 0.00 0.00 24542.79 5952.85 41943.04
00:21:22.222 {
00:21:22.222 "results": [
00:21:22.222 {
00:21:22.222 "job": "TLSTESTn1",
00:21:22.222 "core_mask": "0x4",
00:21:22.222 "workload": "verify",
00:21:22.222 "status": "finished",
00:21:22.222 "verify_range": {
00:21:22.222 "start": 0,
00:21:22.222 "length": 8192
00:21:22.222 },
00:21:22.222 "queue_depth": 128,
00:21:22.222 "io_size": 4096,
00:21:22.222 "runtime": 10.022954,
00:21:22.222 "iops": 5206.249574726174,
00:21:22.222 "mibps": 20.336912401274116,
00:21:22.222 "io_failed": 0,
00:21:22.222 "io_timeout": 0,
00:21:22.222 "avg_latency_us": 24542.792640118558,
00:21:22.222 "min_latency_us": 5952.8533333333335,
00:21:22.222 "max_latency_us": 41943.04
00:21:22.222 }
00:21:22.222 ],
00:21:22.222 "core_count": 1
00:21:22.222 }
00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id
00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0
00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files
00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:21:22.222 nvmf_trace.0
00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0
00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1994276
00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1994276 ']'
00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips --
common/autotest_common.sh@958 -- # kill -0 1994276 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1994276 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1994276' 00:21:22.222 killing process with pid 1994276 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1994276 00:21:22.222 Received shutdown signal, test time was about 10.000000 seconds 00:21:22.222 00:21:22.222 Latency(us) 00:21:22.222 [2024-11-20T16:04:14.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.222 [2024-11-20T16:04:14.398Z] =================================================================================================================== 00:21:22.222 [2024-11-20T16:04:14.398Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1994276 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:22.222 rmmod nvme_tcp 00:21:22.222 rmmod nvme_fabrics 00:21:22.222 rmmod nvme_keyring 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1994164 ']' 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1994164 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1994164 ']' 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1994164 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.222 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1994164 00:21:22.484 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:22.484 17:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:22.484 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1994164' 00:21:22.484 killing process with pid 1994164 00:21:22.484 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1994164 00:21:22.484 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1994164 00:21:22.484 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:22.484 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:22.484 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:22.484 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:22.484 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:22.484 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:22.484 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:22.484 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:22.484 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:22.484 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.484 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.484 17:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.xed 00:21:25.028 00:21:25.028 real 0m23.227s 00:21:25.028 user 0m24.926s 00:21:25.028 sys 0m9.718s 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:25.028 ************************************ 00:21:25.028 END TEST nvmf_fips 00:21:25.028 ************************************ 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:25.028 ************************************ 00:21:25.028 START TEST nvmf_control_msg_list 00:21:25.028 ************************************ 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:25.028 * Looking for test storage... 
00:21:25.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:25.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.028 --rc genhtml_branch_coverage=1 00:21:25.028 --rc genhtml_function_coverage=1 00:21:25.028 --rc genhtml_legend=1 00:21:25.028 --rc geninfo_all_blocks=1 00:21:25.028 --rc geninfo_unexecuted_blocks=1 00:21:25.028 00:21:25.028 ' 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:25.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.028 --rc genhtml_branch_coverage=1 00:21:25.028 --rc genhtml_function_coverage=1 00:21:25.028 --rc genhtml_legend=1 00:21:25.028 --rc geninfo_all_blocks=1 00:21:25.028 --rc geninfo_unexecuted_blocks=1 00:21:25.028 00:21:25.028 ' 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:25.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.028 --rc genhtml_branch_coverage=1 00:21:25.028 --rc genhtml_function_coverage=1 00:21:25.028 --rc genhtml_legend=1 00:21:25.028 --rc geninfo_all_blocks=1 00:21:25.028 --rc geninfo_unexecuted_blocks=1 00:21:25.028 00:21:25.028 ' 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:25.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.028 --rc genhtml_branch_coverage=1 00:21:25.028 --rc genhtml_function_coverage=1 00:21:25.028 --rc genhtml_legend=1 00:21:25.028 --rc geninfo_all_blocks=1 00:21:25.028 --rc geninfo_unexecuted_blocks=1 00:21:25.028 00:21:25.028 ' 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.028 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:25.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:25.029 17:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:33.167 17:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:33.167 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.167 17:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:33.167 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:33.167 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:33.167 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:33.167 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:33.168 17:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:33.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.729 ms 00:21:33.168 00:21:33.168 --- 10.0.0.2 ping statistics --- 00:21:33.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.168 rtt min/avg/max/mdev = 0.729/0.729/0.729/0.000 ms 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:33.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:33.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:21:33.168 00:21:33.168 --- 10.0.0.1 ping statistics --- 00:21:33.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.168 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2000912 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2000912 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2000912 ']' 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.168 17:04:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:33.168 [2024-11-20 17:04:24.552650] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:21:33.168 [2024-11-20 17:04:24.552750] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.168 [2024-11-20 17:04:24.657261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.168 [2024-11-20 17:04:24.708299] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.168 [2024-11-20 17:04:24.708351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.168 [2024-11-20 17:04:24.708361] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.168 [2024-11-20 17:04:24.708368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.168 [2024-11-20 17:04:24.708375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
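As in the FIPS test above, the target runs inside the namespace, so only its 10.0.0.2:4420 listener is reachable from the initiator side, and waitforlisten blocks until the RPC socket answers before any configuration RPCs are issued. A minimal sketch of that launch-and-wait pattern; the until-loop is a simplified stand-in for autotest_common's waitforlisten, and ./build/bin stands in for the full Jenkins workspace path shown above:

# start nvmf_tgt inside the target namespace with all trace groups enabled
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!

# simplified waitforlisten: poll the RPC socket until it responds
until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done

Once the socket is live, the RPCs that follow configure the test: nvmf_create_transport with --control-msg-num 1 shrinks the control-message pool to a single entry, the idea being that the three concurrent spdk_nvme_perf jobs launched below then contend for it and exercise the control-message list handling that gives this test its name.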
00:21:33.168 [2024-11-20 17:04:24.709144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:33.430 [2024-11-20 17:04:25.428275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:33.430 Malloc0 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.430 17:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:33.430 [2024-11-20 17:04:25.482688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2000978 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2000979 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2000980 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2000978 00:21:33.430 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:33.430 [2024-11-20 17:04:25.583192] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:33.430 [2024-11-20 17:04:25.593260] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:33.430 [2024-11-20 17:04:25.593575] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:34.819 Initializing NVMe Controllers 00:21:34.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:34.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:34.819 Initialization complete. Launching workers. 
00:21:34.819 ========================================================
00:21:34.819                                                              Latency(us)
00:21:34.819 Device Information                                                       :    IOPS   MiB/s   Average       min       max
00:21:34.819 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1569.00    6.13    637.05    164.96    955.49
00:21:34.819 ========================================================
00:21:34.819 Total                                                                    : 1569.00    6.13    637.05    164.96    955.49
00:21:34.819 
00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2000979
00:21:34.819 Initializing NVMe Controllers
00:21:34.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:34.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:21:34.819 Initialization complete. Launching workers.
00:21:34.819 ========================================================
00:21:34.819                                                              Latency(us)
00:21:34.819 Device Information                                                       :    IOPS   MiB/s   Average       min       max
00:21:34.819 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2:   24.00    0.09  41672.92  40678.03  42080.87
00:21:34.819 ========================================================
00:21:34.819 Total                                                                    :   24.00    0.09  41672.92  40678.03  42080.87
00:21:34.819 
00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2000980
00:21:34.819 Initializing NVMe Controllers
00:21:34.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:34.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:21:34.819 Initialization complete. Launching workers.
00:21:34.819 ========================================================
00:21:34.819                                                              Latency(us)
00:21:34.819 Device Information                                                       :    IOPS   MiB/s   Average       min       max
00:21:34.819 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3:   25.00    0.10  41652.27  40754.79  42046.31
00:21:34.819 ========================================================
00:21:34.819 Total                                                                    :   25.00    0.10  41652.27  40754.79  42046.31
00:21:34.819 
00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:34.819 rmmod nvme_tcp
00:21:34.819 rmmod nvme_fabrics
00:21:34.819 rmmod nvme_keyring
00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 
-- # '[' -n 2000912 ']' 00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2000912 00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2000912 ']' 00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2000912 00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2000912 00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2000912' 00:21:34.819 killing process with pid 2000912 00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2000912 00:21:34.819 17:04:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2000912 00:21:35.081 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:35.081 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:35.081 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:35.081 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:35.081 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:35.081 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:35.081 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:35.081 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:35.081 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:35.081 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.081 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.081 17:04:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:37.630 00:21:37.630 real 0m12.483s 00:21:37.630 user 0m8.122s 00:21:37.630 sys 0m6.599s 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:37.630 ************************************ 00:21:37.630 END TEST nvmf_control_msg_list 00:21:37.630 
************************************ 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:37.630 ************************************ 00:21:37.630 START TEST nvmf_wait_for_buf 00:21:37.630 ************************************ 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:37.630 * Looking for test storage... 00:21:37.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:37.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.630 --rc genhtml_branch_coverage=1 00:21:37.630 --rc genhtml_function_coverage=1 00:21:37.630 --rc genhtml_legend=1 00:21:37.630 --rc geninfo_all_blocks=1 00:21:37.630 --rc geninfo_unexecuted_blocks=1 00:21:37.630 00:21:37.630 ' 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:37.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.630 --rc genhtml_branch_coverage=1 00:21:37.630 --rc genhtml_function_coverage=1 00:21:37.630 --rc genhtml_legend=1 00:21:37.630 --rc geninfo_all_blocks=1 00:21:37.630 --rc geninfo_unexecuted_blocks=1 00:21:37.630 00:21:37.630 ' 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:37.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.630 --rc genhtml_branch_coverage=1 00:21:37.630 --rc genhtml_function_coverage=1 00:21:37.630 --rc genhtml_legend=1 00:21:37.630 --rc geninfo_all_blocks=1 00:21:37.630 --rc geninfo_unexecuted_blocks=1 00:21:37.630 00:21:37.630 ' 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:37.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.630 --rc genhtml_branch_coverage=1 00:21:37.630 --rc genhtml_function_coverage=1 00:21:37.630 --rc genhtml_legend=1 00:21:37.630 --rc geninfo_all_blocks=1 00:21:37.630 --rc geninfo_unexecuted_blocks=1 00:21:37.630 00:21:37.630 ' 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.630 17:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.630 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:37.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:37.631 17:04:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.774 
17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:45.774 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:45.774 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:45.774 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:45.774 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.774 17:04:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.774 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.775 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.775 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.775 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:45.775 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.775 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.775 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:45.775 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:45.775 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:45.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:21:45.775 00:21:45.775 --- 10.0.0.2 ping statistics --- 00:21:45.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.775 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:21:45.775 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:45.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:45.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:21:45.775 00:21:45.775 --- 10.0.0.1 ping statistics --- 00:21:45.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.775 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:21:45.775 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.775 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:45.775 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:45.775 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.775 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:45.775 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:45.775 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.775 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:45.775 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:45.775 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:45.775 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:45.775 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:45.775 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:45.775 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2005536 00:21:45.775 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2005536 00:21:45.775 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:45.775 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2005536 ']' 00:21:45.775 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.775 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.775 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.775 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.775 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:45.775 [2024-11-20 17:04:37.111478] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
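Condensing the namespace plumbing above into one place: the harness moves the physical target port into a private network namespace, addresses both ends, opens the NVMe/TCP port in the firewall, and verifies reachability in both directions before launching the target inside the namespace. A sketch follows; the interface names are the ones detected in this run, and the nvmf_tgt path is shortened from the full workspace path shown in the log.

# Namespace isolation for the target-side E810 port, as driven by nvmf/common.sh above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                     # root ns -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> root ns
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc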
00:21:45.775 [2024-11-20 17:04:37.111543] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.775 [2024-11-20 17:04:37.212858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.775 [2024-11-20 17:04:37.263593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.775 [2024-11-20 17:04:37.263648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.775 [2024-11-20 17:04:37.263657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.775 [2024-11-20 17:04:37.263665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.775 [2024-11-20 17:04:37.263672] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:45.775 [2024-11-20 17:04:37.264497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.775 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.775 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:45.775 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:45.775 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:45.775 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.037 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.037 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:46.037 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:46.037 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:46.037 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.037 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.037 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.037 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:46.037 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.037 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.037 17:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.037 Malloc0 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.037 [2024-11-20 17:04:38.106665] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.037 [2024-11-20 17:04:38.142994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.037 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:46.298 [2024-11-20 17:04:38.256303] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:21:47.682 Initializing NVMe Controllers
00:21:47.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:47.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:21:47.682 Initialization complete. Launching workers.
00:21:47.682 ========================================================
00:21:47.682                                                              Latency(us)
00:21:47.682 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:21:47.682 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0:   25.00    3.12  167273.44   47867.03  199533.97
00:21:47.682 ========================================================
00:21:47.682 Total                                                                    :   25.00    3.12  167273.44   47867.03  199533.97
00:21:47.682 
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]]
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:47.682 rmmod nvme_tcp
00:21:47.682 rmmod nvme_fabrics
00:21:47.682 rmmod nvme_keyring
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2005536 ']'
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2005536
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2005536 ']'
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2005536
00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2005536 00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:47.682 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:47.683 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2005536' 00:21:47.683 killing process with pid 2005536 00:21:47.683 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2005536 00:21:47.683 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2005536 00:21:47.943 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:47.943 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:47.943 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:47.943 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:47.943 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:47.943 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:47.943 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:47.943 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:47.943 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:47.943 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.943 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.943 17:04:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.491 17:04:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:50.491 00:21:50.491 real 0m12.795s 00:21:50.491 user 0m5.202s 00:21:50.491 sys 0m6.204s 00:21:50.491 17:04:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:50.491 17:04:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:50.491 ************************************ 00:21:50.491 END TEST nvmf_wait_for_buf 00:21:50.491 ************************************ 00:21:50.491 17:04:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:50.491 17:04:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:50.491 17:04:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:50.491 17:04:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:50.491 17:04:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:50.491 17:04:42 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.076 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.077 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.077 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:57.338 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.338 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:57.339 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:57.339 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:57.339 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:57.339 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:57.339 ************************************ 00:21:57.339 START TEST nvmf_perf_adq 00:21:57.339 ************************************ 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:57.339 * Looking for test storage... 00:21:57.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:57.339 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:57.340 17:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:57.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.340 --rc genhtml_branch_coverage=1 00:21:57.340 --rc genhtml_function_coverage=1 00:21:57.340 --rc genhtml_legend=1 00:21:57.340 --rc geninfo_all_blocks=1 00:21:57.340 --rc geninfo_unexecuted_blocks=1 00:21:57.340 00:21:57.340 ' 00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:57.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.340 --rc genhtml_branch_coverage=1 00:21:57.340 --rc genhtml_function_coverage=1 00:21:57.340 --rc genhtml_legend=1 00:21:57.340 --rc geninfo_all_blocks=1 00:21:57.340 --rc geninfo_unexecuted_blocks=1 00:21:57.340 00:21:57.340 ' 00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:57.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.340 --rc genhtml_branch_coverage=1 00:21:57.340 --rc genhtml_function_coverage=1 00:21:57.340 --rc genhtml_legend=1 00:21:57.340 --rc geninfo_all_blocks=1 00:21:57.340 --rc geninfo_unexecuted_blocks=1 00:21:57.340 00:21:57.340 ' 00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:57.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.340 --rc genhtml_branch_coverage=1 00:21:57.340 --rc genhtml_function_coverage=1 00:21:57.340 --rc genhtml_legend=1 00:21:57.340 --rc geninfo_all_blocks=1 00:21:57.340 --rc geninfo_unexecuted_blocks=1 00:21:57.340 00:21:57.340 ' 00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
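
[Annotation] The trace above is autotest_common.sh probing the installed lcov version: 'lcov --version | awk '{print $NF}'' extracts the version string, and 'lt 1.15 2' hands it to scripts/common.sh's cmp_versions, which splits both operands on '.', '-' and ':' and compares them numerically component by component before exporting the matching LCOV_OPTS/LCOV flags. A minimal standalone bash sketch of that comparison (an illustrative re-implementation, not the exact scripts/common.sh source; it assumes numeric components, which the real helper enforces with its decimal check):

# lt VER1 VER2 -> exit 0 when VER1 sorts before VER2
lt() {
  local -a ver1 ver2
  local v len
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first differing component decides
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1   # equal versions are not "less than"
}
# Mirrors the probe above: enable the lcov 1.x coverage flags only on old lcov.
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x detected"
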
00:21:57.340 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:57.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:57.602 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:57.602 17:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:05.747 17:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:05.747 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:05.747 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.747 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:05.748 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:05.748 17:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:05.748 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:05.748 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:06.008 17:04:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:08.554 17:05:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:13.845 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:13.846 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:13.846 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:13.846 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:13.846 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:13.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:13.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms
00:22:13.846
00:22:13.846 --- 10.0.0.2 ping statistics ---
00:22:13.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:13.846 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms
00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:13.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:13.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms
00:22:13.846
00:22:13.846 --- 10.0.0.1 ping statistics ---
00:22:13.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:13.846 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms
00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:13.846 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:13.847 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:13.847 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:22:13.847 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:13.847 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:13.847 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:13.847 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2015606
00:22:13.847 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2015606
00:22:13.847 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2015606 ']'
00:22:13.847 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:22:13.847 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:13.847 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:13.847 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:13.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:13.847 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:13.847 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:13.847 [2024-11-20 17:05:05.678080] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization...
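
[Annotation] Condensed, the plumbing just traced is: one E810 port (cvl_0_0) is moved into a private network namespace to host the NVMe/TCP target, while its sibling port (cvl_0_1) stays in the root namespace as the initiator, so I/O really traverses the physical link. A sketch of the same setup, run as root (device names, the 10.0.0.0/24 addressing, and the shortened nvmf_tgt path are specific to this test bed; the harness additionally tags the iptables rule with an SPDK_NVMF comment so nvmftestfini can strip it later):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                   # root ns -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespaced target -> root ns
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
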
00:22:13.847 [2024-11-20 17:05:05.678144] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.847 [2024-11-20 17:05:05.778050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:13.847 [2024-11-20 17:05:05.833313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.847 [2024-11-20 17:05:05.833363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.847 [2024-11-20 17:05:05.833371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:13.847 [2024-11-20 17:05:05.833379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:13.847 [2024-11-20 17:05:05.833386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:13.847 [2024-11-20 17:05:05.835408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.847 [2024-11-20 17:05:05.835568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.847 [2024-11-20 17:05:05.835730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.847 [2024-11-20 17:05:05.835731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:14.420 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.420 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:14.420 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:14.420 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:14.420 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.420 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.420 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:14.420 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:14.420 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:14.420 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.420 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.420 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.681 
17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.681 [2024-11-20 17:05:06.705551] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.681 Malloc1 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.681 [2024-11-20 17:05:06.778892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2015908 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:14.681 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:17.230 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:17.230 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.230 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.230 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.230 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:17.230 "tick_rate": 2400000000, 00:22:17.230 "poll_groups": [ 00:22:17.230 { 00:22:17.230 "name": "nvmf_tgt_poll_group_000", 00:22:17.230 "admin_qpairs": 1, 00:22:17.230 "io_qpairs": 1, 00:22:17.230 "current_admin_qpairs": 1, 00:22:17.230 "current_io_qpairs": 1, 00:22:17.230 "pending_bdev_io": 0, 00:22:17.230 "completed_nvme_io": 16261, 00:22:17.230 "transports": [ 00:22:17.230 { 00:22:17.230 "trtype": "TCP" 00:22:17.230 } 00:22:17.230 ] 00:22:17.230 }, 00:22:17.230 { 00:22:17.230 "name": "nvmf_tgt_poll_group_001", 00:22:17.230 "admin_qpairs": 0, 00:22:17.230 "io_qpairs": 1, 00:22:17.230 "current_admin_qpairs": 0, 00:22:17.230 "current_io_qpairs": 1, 00:22:17.230 "pending_bdev_io": 0, 00:22:17.230 "completed_nvme_io": 17506, 00:22:17.230 "transports": [ 00:22:17.230 { 00:22:17.230 "trtype": "TCP" 00:22:17.230 } 00:22:17.230 ] 00:22:17.230 }, 00:22:17.230 { 00:22:17.230 "name": "nvmf_tgt_poll_group_002", 00:22:17.230 "admin_qpairs": 0, 00:22:17.230 "io_qpairs": 1, 00:22:17.230 "current_admin_qpairs": 0, 00:22:17.230 "current_io_qpairs": 1, 00:22:17.230 "pending_bdev_io": 0, 00:22:17.230 "completed_nvme_io": 16971, 00:22:17.230 "transports": [ 00:22:17.230 { 00:22:17.230 "trtype": "TCP" 00:22:17.230 } 00:22:17.230 ] 00:22:17.230 }, 00:22:17.230 { 00:22:17.230 "name": "nvmf_tgt_poll_group_003", 00:22:17.230 "admin_qpairs": 0, 00:22:17.230 "io_qpairs": 1, 00:22:17.230 "current_admin_qpairs": 0, 00:22:17.230 "current_io_qpairs": 1, 00:22:17.230 "pending_bdev_io": 0, 00:22:17.230 "completed_nvme_io": 16262, 00:22:17.230 "transports": [ 00:22:17.230 { 00:22:17.230 "trtype": "TCP" 00:22:17.230 } 00:22:17.230 ] 00:22:17.230 } 00:22:17.230 ] 00:22:17.230 }' 00:22:17.230 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:17.230 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:17.230 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:17.230 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:17.230 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2015908 00:22:25.369 Initializing NVMe Controllers 00:22:25.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:25.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:25.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:25.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:25.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7
00:22:25.369 Initialization complete. Launching workers.
00:22:25.369 ========================================================
00:22:25.369 Latency(us)
00:22:25.369 Device Information : IOPS MiB/s Average min max
00:22:25.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13417.60 52.41 4770.27 1270.63 10934.73
00:22:25.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13161.70 51.41 4863.08 1283.90 13034.23
00:22:25.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13284.90 51.89 4816.85 1240.46 14432.33
00:22:25.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12676.20 49.52 5049.60 1634.29 13418.88
00:22:25.369 ========================================================
00:22:25.369 Total : 52540.40 205.24 4872.69 1240.46 14432.33
00:22:25.369
00:22:25.369 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:22:25.369 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:25.369 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:22:25.369 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:25.369 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:22:25.369 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:25.369 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:25.369 rmmod nvme_tcp
00:22:25.369 rmmod nvme_fabrics
00:22:25.369 rmmod nvme_keyring
00:22:25.369 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:25.369 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:22:25.369 17:05:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2015606 ']'
00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2015606
00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2015606 ']'
00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2015606
00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2015606
00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2015606'
00:22:25.369 killing process with pid 2015606
00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2015606
00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2015606
00:22:25.369 17:05:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.369 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.285 17:05:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:27.285 17:05:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:27.285 17:05:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:27.285 17:05:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:29.210 17:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:31.290 17:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:36.591 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:36.592 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:36.592 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:36.592 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:36.592 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:36.592 17:05:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:36.592 17:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:36.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:36.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:22:36.592 00:22:36.592 --- 10.0.0.2 ping statistics --- 00:22:36.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.592 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:36.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:36.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:22:36.592 00:22:36.592 --- 10.0.0.1 ping statistics --- 00:22:36.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.592 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:36.592 net.core.busy_poll = 1 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:36.592 net.core.busy_read = 1 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.592 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2020394 00:22:36.593 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2020394 00:22:36.593 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:36.593 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2020394 ']' 00:22:36.593 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.593 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:36.593 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.593 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:36.593 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.593 [2024-11-20 17:05:28.658952] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:22:36.593 [2024-11-20 17:05:28.659016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.853 [2024-11-20 17:05:28.763988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:36.853 [2024-11-20 17:05:28.817880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
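For readers following the trace: the adq_configure_driver step above (perf_adq.sh@22-38) is the heart of this test. Under the same interface, namespace, and address used in this run, it condenses to the sketch below; the mqprio line splits the e810 port into two hardware traffic classes and the flower filter pins NVMe/TCP port 4420 onto the second one. This is a sketch of the sequence, not the full script:

ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on   # enable HW traffic-class offload
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1 net.core.busy_read=1                        # poll sockets instead of waiting on interrupts
# Two traffic classes: 2 queues at offset 0 (TC0), 2 queues at offset 2 (TC1), offloaded to the NIC.
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1, in hardware only (skip_sw).
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1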
00:22:36.853 [2024-11-20 17:05:28.817938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.853 [2024-11-20 17:05:28.817947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.853 [2024-11-20 17:05:28.817955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.853 [2024-11-20 17:05:28.817961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.853 [2024-11-20 17:05:28.820380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.853 [2024-11-20 17:05:28.820540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.853 [2024-11-20 17:05:28.820700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.853 [2024-11-20 17:05:28.820700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.426 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.687 17:05:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.687 [2024-11-20 17:05:29.682097] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.687 Malloc1 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.687 [2024-11-20 17:05:29.755013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2020725 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:37.687 17:05:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:39.603 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:39.603 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.603 17:05:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.863 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.863 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:39.863 "tick_rate": 2400000000, 00:22:39.863 "poll_groups": [ 00:22:39.863 { 00:22:39.863 "name": "nvmf_tgt_poll_group_000", 00:22:39.863 "admin_qpairs": 1, 00:22:39.863 "io_qpairs": 3, 00:22:39.863 "current_admin_qpairs": 1, 00:22:39.863 "current_io_qpairs": 3, 00:22:39.863 "pending_bdev_io": 0, 00:22:39.863 "completed_nvme_io": 27280, 00:22:39.863 "transports": [ 00:22:39.863 { 00:22:39.863 "trtype": "TCP" 00:22:39.863 } 00:22:39.863 ] 00:22:39.863 }, 00:22:39.863 { 00:22:39.863 "name": "nvmf_tgt_poll_group_001", 00:22:39.863 "admin_qpairs": 0, 00:22:39.863 "io_qpairs": 1, 00:22:39.863 "current_admin_qpairs": 0, 00:22:39.863 "current_io_qpairs": 1, 00:22:39.863 "pending_bdev_io": 0, 00:22:39.863 "completed_nvme_io": 25844, 00:22:39.863 "transports": [ 00:22:39.863 { 00:22:39.863 "trtype": "TCP" 00:22:39.863 } 00:22:39.863 ] 00:22:39.863 }, 00:22:39.863 { 00:22:39.863 "name": "nvmf_tgt_poll_group_002", 00:22:39.863 "admin_qpairs": 0, 00:22:39.863 "io_qpairs": 0, 00:22:39.863 "current_admin_qpairs": 0, 00:22:39.863 "current_io_qpairs": 0, 00:22:39.863 "pending_bdev_io": 0, 00:22:39.863 "completed_nvme_io": 0, 00:22:39.863 "transports": [ 00:22:39.863 { 00:22:39.863 "trtype": "TCP" 00:22:39.863 } 00:22:39.863 ] 00:22:39.863 }, 00:22:39.863 { 00:22:39.863 "name": "nvmf_tgt_poll_group_003", 00:22:39.863 "admin_qpairs": 0, 00:22:39.863 "io_qpairs": 0, 00:22:39.863 "current_admin_qpairs": 0, 00:22:39.863 "current_io_qpairs": 0, 00:22:39.863 "pending_bdev_io": 0, 00:22:39.863 "completed_nvme_io": 0, 00:22:39.863 "transports": [ 00:22:39.863 { 00:22:39.863 "trtype": "TCP" 00:22:39.863 } 00:22:39.863 ] 00:22:39.863 } 00:22:39.863 ] 00:22:39.863 }' 00:22:39.863 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:39.863 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:39.863 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:39.863 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:39.863 17:05:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2020725 00:22:48.005 Initializing NVMe Controllers 00:22:48.005 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:48.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:48.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:48.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:48.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:48.005 Initialization complete. Launching workers. 
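The perf_adq.sh@107-109 check captured above is worth unpacking before the results print: it asks the target for per-poll-group statistics and asserts that, with ADQ steering all connections onto one traffic class, at least two of the four poll groups own no I/O qpairs. A hand-run equivalent might look like the sketch below, using scripts/rpc.py (the raw form of the rpc_cmd wrapper in the trace; stats.json is just a scratch file name here). The latency table that follows shows the matching skew in per-core IOPS:

# Dump per-poll-group qpair counters from the running target (JSON on stdout).
scripts/rpc.py nvmf_get_stats > stats.json
# One output line per poll group with zero current I/O qpairs; the test wants at least 2.
count=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' stats.json | wc -l)
if [[ $count -lt 2 ]]; then
    echo "ADQ placement failed: I/O qpairs landed on too many poll groups"
fi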
00:22:48.005 ========================================================
00:22:48.005 Latency(us)
00:22:48.005 Device Information : IOPS MiB/s Average min max
00:22:48.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5612.70 21.92 11415.30 1204.71 59976.62
00:22:48.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7156.00 27.95 8970.66 1174.64 64048.95
00:22:48.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 17220.90 67.27 3716.09 922.52 45689.35
00:22:48.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6748.50 26.36 9513.13 1183.26 56798.25
00:22:48.005 ========================================================
00:22:48.005 Total : 36738.09 143.51 6980.72 922.52 64048.95
00:22:48.005
00:22:48.005 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:48.005 rmmod nvme_tcp
00:22:48.005 rmmod nvme_fabrics
00:22:48.005 rmmod nvme_keyring
00:22:48.005 17:05:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2020394 ']'
17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2020394
17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2020394 ']'
17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2020394
17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2020394
17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2020394'
killing process with pid 2020394
17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2020394
17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2020394
00:22:48.266 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:48.266
17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:48.266 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:48.266 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:48.266 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:48.266 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:48.266 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:48.266 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:48.266 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:48.266 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.266 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.266 17:05:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:51.567 00:22:51.567 real 0m53.986s 00:22:51.567 user 2m49.248s 00:22:51.567 sys 0m11.852s 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:51.567 ************************************ 00:22:51.567 END TEST nvmf_perf_adq 00:22:51.567 ************************************ 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:51.567 ************************************ 00:22:51.567 START TEST nvmf_shutdown 00:22:51.567 ************************************ 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:51.567 * Looking for test storage... 
00:22:51.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:51.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.567 --rc genhtml_branch_coverage=1 00:22:51.567 --rc genhtml_function_coverage=1 00:22:51.567 --rc genhtml_legend=1 00:22:51.567 --rc geninfo_all_blocks=1 00:22:51.567 --rc geninfo_unexecuted_blocks=1 00:22:51.567 00:22:51.567 ' 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:51.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.567 --rc genhtml_branch_coverage=1 00:22:51.567 --rc genhtml_function_coverage=1 00:22:51.567 --rc genhtml_legend=1 00:22:51.567 --rc geninfo_all_blocks=1 00:22:51.567 --rc geninfo_unexecuted_blocks=1 00:22:51.567 00:22:51.567 ' 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:51.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.567 --rc genhtml_branch_coverage=1 00:22:51.567 --rc genhtml_function_coverage=1 00:22:51.567 --rc genhtml_legend=1 00:22:51.567 --rc geninfo_all_blocks=1 00:22:51.567 --rc geninfo_unexecuted_blocks=1 00:22:51.567 00:22:51.567 ' 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:51.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.567 --rc genhtml_branch_coverage=1 00:22:51.567 --rc genhtml_function_coverage=1 00:22:51.567 --rc genhtml_legend=1 00:22:51.567 --rc geninfo_all_blocks=1 00:22:51.567 --rc geninfo_unexecuted_blocks=1 00:22:51.567 00:22:51.567 ' 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
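The lt/cmp_versions trace just above is shutdown.sh deciding whether the installed lcov is at least version 2, via a component-wise numeric compare of dot/dash/colon-separated version strings. A simplified sketch of the same idea (not the verbatim scripts/common.sh code, which also normalizes each component through decimal()):

# lt A B: succeed when version A sorts strictly before version B.
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local IFS='.-:' i
    local -a v1 v2
    read -ra v1 <<< "$1"    # e.g. "1.15" -> (1 15)
    read -ra v2 <<< "$3"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && { [[ $2 == '>' ]]; return; }
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && { [[ $2 == '<' ]]; return; }
    done
    [[ $2 == *'='* ]]       # all components equal
}
lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the 'lt 1.15 2' call in the trace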
00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.567 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:51.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:51.568 17:05:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:51.568 ************************************ 00:22:51.568 START TEST nvmf_shutdown_tc1 00:22:51.568 ************************************ 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:51.568 17:05:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:59.710 17:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:59.710 17:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.710 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:59.710 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:59.711 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:59.711 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:59.711 17:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:59.711 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.711 17:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:59.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:22:59.711 00:22:59.711 --- 10.0.0.2 ping statistics --- 00:22:59.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.711 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:59.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:22:59.711 00:22:59.711 --- 10.0.0.1 ping statistics --- 00:22:59.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.711 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2027196 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2027196 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2027196 ']' 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
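The nvmf_tcp_init sequence traced above builds the two-port loopback topology for the test: one physical port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1, an iptables rule opens NVMe/TCP port 4420, and a ping in each direction confirms reachability before nvmf_tgt is started inside the namespace with ip netns exec. A minimal standalone sketch of the same setup, reusing the interface names and addresses from this log (on another host the port names would differ, and a veth pair could stand in for real NICs):

  # Target port lives in its own namespace; initiator port stays in the root ns.
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # Open the NVMe/TCP listener port, then verify reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1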
00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.711 17:05:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:59.711 [2024-11-20 17:05:51.342001] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:22:59.711 [2024-11-20 17:05:51.342066] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.711 [2024-11-20 17:05:51.443903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:59.712 [2024-11-20 17:05:51.496366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.712 [2024-11-20 17:05:51.496413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.712 [2024-11-20 17:05:51.496422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.712 [2024-11-20 17:05:51.496430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.712 [2024-11-20 17:05:51.496436] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:59.712 [2024-11-20 17:05:51.498815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.712 [2024-11-20 17:05:51.498979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:59.712 [2024-11-20 17:05:51.499145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.712 [2024-11-20 17:05:51.499145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:00.285 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.285 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:00.285 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:00.285 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:00.285 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.285 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.285 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:00.285 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.285 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.286 [2024-11-20 17:05:52.225020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:00.286 17:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.286 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.286 Malloc1 
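The create_subsystems step above only accumulates RPC lines: for each i in 1..10 the cat at shutdown.sh@29 appends one block to rpcs.txt, and the single rpc_cmd at shutdown.sh@36 replays the whole file against the target, which is why Malloc1 appears above and Malloc2 through Malloc10 appear just below in one burst. The exact contents of each block are not echoed in this log, so the sketch below is an assumption reconstructed from the MallocN bdev names and the nqn.2016-06.io.spdk:cnodeN subsystems the initiator attaches to later, not a quote from shutdown.sh:

  # Hypothetical per-subsystem block; sizes, serial numbers and the listener
  # address are placeholders inferred from this log, not from shutdown.sh.
  for i in {1..10}; do
      {
          echo "bdev_malloc_create -b Malloc$i 64 512"
          echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
          echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
          echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
      } >> rpcs.txt
  done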
00:23:00.286 [2024-11-20 17:05:52.360459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.286 Malloc2 00:23:00.286 Malloc3 00:23:00.546 Malloc4 00:23:00.546 Malloc5 00:23:00.546 Malloc6 00:23:00.546 Malloc7 00:23:00.546 Malloc8 00:23:00.546 Malloc9 00:23:00.809 Malloc10 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2027575 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2027575 /var/tmp/bdevperf.sock 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2027575 ']' 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
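The bdev_svc helper is started with --json /dev/fd/63: shutdown.sh line 74 (quoted verbatim in the Killed message further down) uses process substitution, --json <(gen_nvmf_target_json "${num_subsystems[@]}"), and /dev/fd/63 is simply bash's name for the read end of that pipe, so the generated configuration never touches disk. gen_nvmf_target_json itself, traced immediately below, renders one bdev_nvme_attach_controller block per subsystem from a heredoc template (filling in $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and the cnodeN/hostN NQNs) and joins the blocks with IFS=, before printing. Equivalent standalone invocation, with paths taken from this log; gen_nvmf_target_json is a shell function from the nvmf test harness, so test/nvmf/common.sh must be sourced and its environment exported first:

  # Same command without the harness wrapper; the <(...) pipe becomes /dev/fd/63.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc \
      -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10)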
00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.809 { 00:23:00.809 "params": { 00:23:00.809 "name": "Nvme$subsystem", 00:23:00.809 "trtype": "$TEST_TRANSPORT", 00:23:00.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.809 "adrfam": "ipv4", 00:23:00.809 "trsvcid": "$NVMF_PORT", 00:23:00.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.809 "hdgst": ${hdgst:-false}, 00:23:00.809 "ddgst": ${ddgst:-false} 00:23:00.809 }, 00:23:00.809 "method": "bdev_nvme_attach_controller" 00:23:00.809 } 00:23:00.809 EOF 00:23:00.809 )") 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.809 { 00:23:00.809 "params": { 00:23:00.809 "name": "Nvme$subsystem", 00:23:00.809 "trtype": "$TEST_TRANSPORT", 00:23:00.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.809 "adrfam": "ipv4", 00:23:00.809 "trsvcid": "$NVMF_PORT", 00:23:00.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.809 "hdgst": ${hdgst:-false}, 00:23:00.809 "ddgst": ${ddgst:-false} 00:23:00.809 }, 00:23:00.809 "method": "bdev_nvme_attach_controller" 00:23:00.809 } 00:23:00.809 EOF 00:23:00.809 )") 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.809 { 00:23:00.809 "params": { 00:23:00.809 "name": "Nvme$subsystem", 00:23:00.809 "trtype": "$TEST_TRANSPORT", 00:23:00.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.809 "adrfam": "ipv4", 00:23:00.809 "trsvcid": "$NVMF_PORT", 00:23:00.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.809 "hdgst": ${hdgst:-false}, 00:23:00.809 "ddgst": ${ddgst:-false} 00:23:00.809 }, 00:23:00.809 "method": "bdev_nvme_attach_controller" 00:23:00.809 } 00:23:00.809 EOF 00:23:00.809 )") 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.809 { 00:23:00.809 "params": { 00:23:00.809 "name": "Nvme$subsystem", 00:23:00.809 "trtype": "$TEST_TRANSPORT", 00:23:00.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.809 "adrfam": "ipv4", 00:23:00.809 "trsvcid": "$NVMF_PORT", 00:23:00.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.809 "hdgst": ${hdgst:-false}, 00:23:00.809 "ddgst": ${ddgst:-false} 00:23:00.809 }, 00:23:00.809 "method": "bdev_nvme_attach_controller" 00:23:00.809 } 00:23:00.809 EOF 00:23:00.809 )") 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.809 { 00:23:00.809 "params": { 00:23:00.809 "name": "Nvme$subsystem", 00:23:00.809 "trtype": "$TEST_TRANSPORT", 00:23:00.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.809 "adrfam": "ipv4", 00:23:00.809 "trsvcid": "$NVMF_PORT", 00:23:00.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.809 "hdgst": ${hdgst:-false}, 00:23:00.809 "ddgst": ${ddgst:-false} 00:23:00.809 }, 00:23:00.809 "method": "bdev_nvme_attach_controller" 00:23:00.809 } 00:23:00.809 EOF 00:23:00.809 )") 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.809 { 00:23:00.809 "params": { 00:23:00.809 "name": "Nvme$subsystem", 00:23:00.809 "trtype": "$TEST_TRANSPORT", 00:23:00.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.809 "adrfam": "ipv4", 00:23:00.809 "trsvcid": "$NVMF_PORT", 00:23:00.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.809 "hdgst": ${hdgst:-false}, 00:23:00.809 "ddgst": ${ddgst:-false} 00:23:00.809 }, 00:23:00.809 "method": "bdev_nvme_attach_controller" 00:23:00.809 } 00:23:00.809 EOF 00:23:00.809 )") 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:00.809 [2024-11-20 17:05:52.871601] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:23:00.809 [2024-11-20 17:05:52.871672] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.809 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.809 { 00:23:00.809 "params": { 00:23:00.809 "name": "Nvme$subsystem", 00:23:00.809 "trtype": "$TEST_TRANSPORT", 00:23:00.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.809 "adrfam": "ipv4", 00:23:00.809 "trsvcid": "$NVMF_PORT", 00:23:00.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.810 "hdgst": ${hdgst:-false}, 00:23:00.810 "ddgst": ${ddgst:-false} 00:23:00.810 }, 00:23:00.810 "method": "bdev_nvme_attach_controller" 00:23:00.810 } 00:23:00.810 EOF 00:23:00.810 )") 00:23:00.810 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:00.810 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.810 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.810 { 00:23:00.810 "params": { 00:23:00.810 "name": "Nvme$subsystem", 00:23:00.810 "trtype": "$TEST_TRANSPORT", 00:23:00.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.810 "adrfam": "ipv4", 00:23:00.810 "trsvcid": "$NVMF_PORT", 00:23:00.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.810 "hdgst": ${hdgst:-false}, 00:23:00.810 "ddgst": ${ddgst:-false} 00:23:00.810 }, 00:23:00.810 "method": "bdev_nvme_attach_controller" 00:23:00.810 } 00:23:00.810 EOF 00:23:00.810 )") 00:23:00.810 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:00.810 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.810 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.810 { 00:23:00.810 "params": { 00:23:00.810 "name": "Nvme$subsystem", 00:23:00.810 "trtype": "$TEST_TRANSPORT", 00:23:00.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.810 "adrfam": "ipv4", 00:23:00.810 "trsvcid": "$NVMF_PORT", 00:23:00.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.810 "hdgst": ${hdgst:-false}, 00:23:00.810 "ddgst": ${ddgst:-false} 00:23:00.810 }, 00:23:00.810 "method": "bdev_nvme_attach_controller" 00:23:00.810 } 00:23:00.810 EOF 00:23:00.810 )") 00:23:00.810 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:00.810 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.810 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.810 { 00:23:00.810 "params": { 00:23:00.810 "name": "Nvme$subsystem", 00:23:00.810 "trtype": "$TEST_TRANSPORT", 00:23:00.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.810 "adrfam": "ipv4", 
00:23:00.810 "trsvcid": "$NVMF_PORT", 00:23:00.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.810 "hdgst": ${hdgst:-false}, 00:23:00.810 "ddgst": ${ddgst:-false} 00:23:00.810 }, 00:23:00.810 "method": "bdev_nvme_attach_controller" 00:23:00.810 } 00:23:00.810 EOF 00:23:00.810 )") 00:23:00.810 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:00.810 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:00.810 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:00.810 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:00.810 "params": { 00:23:00.810 "name": "Nvme1", 00:23:00.810 "trtype": "tcp", 00:23:00.810 "traddr": "10.0.0.2", 00:23:00.810 "adrfam": "ipv4", 00:23:00.810 "trsvcid": "4420", 00:23:00.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.810 "hdgst": false, 00:23:00.810 "ddgst": false 00:23:00.810 }, 00:23:00.810 "method": "bdev_nvme_attach_controller" 00:23:00.810 },{ 00:23:00.810 "params": { 00:23:00.810 "name": "Nvme2", 00:23:00.810 "trtype": "tcp", 00:23:00.810 "traddr": "10.0.0.2", 00:23:00.810 "adrfam": "ipv4", 00:23:00.810 "trsvcid": "4420", 00:23:00.810 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:00.810 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:00.810 "hdgst": false, 00:23:00.810 "ddgst": false 00:23:00.810 }, 00:23:00.810 "method": "bdev_nvme_attach_controller" 00:23:00.810 },{ 00:23:00.810 "params": { 00:23:00.810 "name": "Nvme3", 00:23:00.810 "trtype": "tcp", 00:23:00.810 "traddr": "10.0.0.2", 00:23:00.810 "adrfam": "ipv4", 00:23:00.810 "trsvcid": "4420", 00:23:00.810 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:00.810 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:00.810 "hdgst": false, 00:23:00.810 "ddgst": false 00:23:00.810 }, 00:23:00.810 "method": "bdev_nvme_attach_controller" 00:23:00.810 },{ 00:23:00.810 "params": { 00:23:00.810 "name": "Nvme4", 00:23:00.810 "trtype": "tcp", 00:23:00.810 "traddr": "10.0.0.2", 00:23:00.810 "adrfam": "ipv4", 00:23:00.810 "trsvcid": "4420", 00:23:00.810 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:00.810 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:00.810 "hdgst": false, 00:23:00.810 "ddgst": false 00:23:00.810 }, 00:23:00.810 "method": "bdev_nvme_attach_controller" 00:23:00.810 },{ 00:23:00.810 "params": { 00:23:00.810 "name": "Nvme5", 00:23:00.810 "trtype": "tcp", 00:23:00.810 "traddr": "10.0.0.2", 00:23:00.810 "adrfam": "ipv4", 00:23:00.810 "trsvcid": "4420", 00:23:00.810 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:00.810 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:00.810 "hdgst": false, 00:23:00.810 "ddgst": false 00:23:00.810 }, 00:23:00.810 "method": "bdev_nvme_attach_controller" 00:23:00.810 },{ 00:23:00.810 "params": { 00:23:00.810 "name": "Nvme6", 00:23:00.810 "trtype": "tcp", 00:23:00.810 "traddr": "10.0.0.2", 00:23:00.810 "adrfam": "ipv4", 00:23:00.810 "trsvcid": "4420", 00:23:00.810 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:00.810 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:00.810 "hdgst": false, 00:23:00.810 "ddgst": false 00:23:00.810 }, 00:23:00.810 "method": "bdev_nvme_attach_controller" 00:23:00.810 },{ 00:23:00.810 "params": { 00:23:00.810 "name": "Nvme7", 00:23:00.810 "trtype": "tcp", 00:23:00.810 "traddr": "10.0.0.2", 00:23:00.810 
"adrfam": "ipv4", 00:23:00.810 "trsvcid": "4420", 00:23:00.810 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:00.810 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:00.810 "hdgst": false, 00:23:00.810 "ddgst": false 00:23:00.810 }, 00:23:00.810 "method": "bdev_nvme_attach_controller" 00:23:00.810 },{ 00:23:00.810 "params": { 00:23:00.810 "name": "Nvme8", 00:23:00.810 "trtype": "tcp", 00:23:00.810 "traddr": "10.0.0.2", 00:23:00.810 "adrfam": "ipv4", 00:23:00.810 "trsvcid": "4420", 00:23:00.810 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:00.810 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:00.810 "hdgst": false, 00:23:00.810 "ddgst": false 00:23:00.810 }, 00:23:00.810 "method": "bdev_nvme_attach_controller" 00:23:00.810 },{ 00:23:00.810 "params": { 00:23:00.810 "name": "Nvme9", 00:23:00.810 "trtype": "tcp", 00:23:00.810 "traddr": "10.0.0.2", 00:23:00.810 "adrfam": "ipv4", 00:23:00.810 "trsvcid": "4420", 00:23:00.810 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:00.810 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:00.810 "hdgst": false, 00:23:00.810 "ddgst": false 00:23:00.810 }, 00:23:00.810 "method": "bdev_nvme_attach_controller" 00:23:00.810 },{ 00:23:00.810 "params": { 00:23:00.810 "name": "Nvme10", 00:23:00.810 "trtype": "tcp", 00:23:00.810 "traddr": "10.0.0.2", 00:23:00.810 "adrfam": "ipv4", 00:23:00.810 "trsvcid": "4420", 00:23:00.810 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:00.810 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:00.810 "hdgst": false, 00:23:00.810 "ddgst": false 00:23:00.810 }, 00:23:00.810 "method": "bdev_nvme_attach_controller" 00:23:00.810 }' 00:23:00.810 [2024-11-20 17:05:52.967841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.071 [2024-11-20 17:05:53.020749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.456 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.456 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:02.456 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:02.456 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.456 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:02.456 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.456 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2027575 00:23:02.456 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:02.456 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:03.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2027575 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:03.399 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2027196 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.400 { 00:23:03.400 "params": { 00:23:03.400 "name": "Nvme$subsystem", 00:23:03.400 "trtype": "$TEST_TRANSPORT", 00:23:03.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.400 "adrfam": "ipv4", 00:23:03.400 "trsvcid": "$NVMF_PORT", 00:23:03.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.400 "hdgst": ${hdgst:-false}, 00:23:03.400 "ddgst": ${ddgst:-false} 00:23:03.400 }, 00:23:03.400 "method": "bdev_nvme_attach_controller" 00:23:03.400 } 00:23:03.400 EOF 00:23:03.400 )") 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.400 { 00:23:03.400 "params": { 00:23:03.400 "name": "Nvme$subsystem", 00:23:03.400 "trtype": "$TEST_TRANSPORT", 00:23:03.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.400 "adrfam": "ipv4", 00:23:03.400 "trsvcid": "$NVMF_PORT", 00:23:03.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.400 "hdgst": ${hdgst:-false}, 00:23:03.400 "ddgst": ${ddgst:-false} 00:23:03.400 }, 00:23:03.400 "method": "bdev_nvme_attach_controller" 00:23:03.400 } 00:23:03.400 EOF 00:23:03.400 )") 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.400 { 00:23:03.400 "params": { 00:23:03.400 "name": "Nvme$subsystem", 00:23:03.400 "trtype": "$TEST_TRANSPORT", 00:23:03.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.400 "adrfam": "ipv4", 00:23:03.400 "trsvcid": "$NVMF_PORT", 00:23:03.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.400 "hdgst": ${hdgst:-false}, 00:23:03.400 "ddgst": ${ddgst:-false} 00:23:03.400 }, 00:23:03.400 "method": "bdev_nvme_attach_controller" 00:23:03.400 } 00:23:03.400 EOF 00:23:03.400 )") 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.400 { 00:23:03.400 "params": { 00:23:03.400 "name": "Nvme$subsystem", 00:23:03.400 "trtype": "$TEST_TRANSPORT", 00:23:03.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.400 "adrfam": "ipv4", 00:23:03.400 "trsvcid": "$NVMF_PORT", 00:23:03.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.400 "hdgst": ${hdgst:-false}, 00:23:03.400 "ddgst": ${ddgst:-false} 00:23:03.400 }, 00:23:03.400 "method": "bdev_nvme_attach_controller" 00:23:03.400 } 00:23:03.400 EOF 00:23:03.400 )") 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.400 { 00:23:03.400 "params": { 00:23:03.400 "name": "Nvme$subsystem", 00:23:03.400 "trtype": "$TEST_TRANSPORT", 00:23:03.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.400 "adrfam": "ipv4", 00:23:03.400 "trsvcid": "$NVMF_PORT", 00:23:03.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.400 "hdgst": ${hdgst:-false}, 00:23:03.400 "ddgst": ${ddgst:-false} 00:23:03.400 }, 00:23:03.400 "method": "bdev_nvme_attach_controller" 00:23:03.400 } 00:23:03.400 EOF 00:23:03.400 )") 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.400 { 00:23:03.400 "params": { 00:23:03.400 "name": "Nvme$subsystem", 00:23:03.400 "trtype": "$TEST_TRANSPORT", 00:23:03.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.400 "adrfam": "ipv4", 00:23:03.400 "trsvcid": "$NVMF_PORT", 00:23:03.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.400 "hdgst": ${hdgst:-false}, 00:23:03.400 "ddgst": ${ddgst:-false} 00:23:03.400 }, 00:23:03.400 "method": "bdev_nvme_attach_controller" 00:23:03.400 } 00:23:03.400 EOF 00:23:03.400 )") 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.400 [2024-11-20 17:05:55.415783] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:23:03.400 [2024-11-20 17:05:55.415837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2028003 ] 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.400 { 00:23:03.400 "params": { 00:23:03.400 "name": "Nvme$subsystem", 00:23:03.400 "trtype": "$TEST_TRANSPORT", 00:23:03.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.400 "adrfam": "ipv4", 00:23:03.400 "trsvcid": "$NVMF_PORT", 00:23:03.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.400 "hdgst": ${hdgst:-false}, 00:23:03.400 "ddgst": ${ddgst:-false} 00:23:03.400 }, 00:23:03.400 "method": "bdev_nvme_attach_controller" 00:23:03.400 } 00:23:03.400 EOF 00:23:03.400 )") 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.400 { 00:23:03.400 "params": { 00:23:03.400 "name": "Nvme$subsystem", 00:23:03.400 "trtype": "$TEST_TRANSPORT", 00:23:03.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.400 "adrfam": "ipv4", 00:23:03.400 "trsvcid": "$NVMF_PORT", 00:23:03.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.400 "hdgst": ${hdgst:-false}, 00:23:03.400 "ddgst": ${ddgst:-false} 00:23:03.400 }, 00:23:03.400 "method": "bdev_nvme_attach_controller" 00:23:03.400 } 00:23:03.400 EOF 00:23:03.400 )") 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.400 { 00:23:03.400 "params": { 00:23:03.400 "name": "Nvme$subsystem", 00:23:03.400 "trtype": "$TEST_TRANSPORT", 00:23:03.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.400 "adrfam": "ipv4", 00:23:03.400 "trsvcid": "$NVMF_PORT", 00:23:03.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.400 "hdgst": ${hdgst:-false}, 00:23:03.400 "ddgst": ${ddgst:-false} 00:23:03.400 }, 00:23:03.400 "method": "bdev_nvme_attach_controller" 00:23:03.400 } 00:23:03.400 EOF 00:23:03.400 )") 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.400 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.400 { 00:23:03.400 "params": { 00:23:03.400 "name": "Nvme$subsystem", 00:23:03.400 "trtype": "$TEST_TRANSPORT", 00:23:03.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.400 
"adrfam": "ipv4", 00:23:03.400 "trsvcid": "$NVMF_PORT", 00:23:03.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.401 "hdgst": ${hdgst:-false}, 00:23:03.401 "ddgst": ${ddgst:-false} 00:23:03.401 }, 00:23:03.401 "method": "bdev_nvme_attach_controller" 00:23:03.401 } 00:23:03.401 EOF 00:23:03.401 )") 00:23:03.401 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.401 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:03.401 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:03.401 17:05:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:03.401 "params": { 00:23:03.401 "name": "Nvme1", 00:23:03.401 "trtype": "tcp", 00:23:03.401 "traddr": "10.0.0.2", 00:23:03.401 "adrfam": "ipv4", 00:23:03.401 "trsvcid": "4420", 00:23:03.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:03.401 "hdgst": false, 00:23:03.401 "ddgst": false 00:23:03.401 }, 00:23:03.401 "method": "bdev_nvme_attach_controller" 00:23:03.401 },{ 00:23:03.401 "params": { 00:23:03.401 "name": "Nvme2", 00:23:03.401 "trtype": "tcp", 00:23:03.401 "traddr": "10.0.0.2", 00:23:03.401 "adrfam": "ipv4", 00:23:03.401 "trsvcid": "4420", 00:23:03.401 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:03.401 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:03.401 "hdgst": false, 00:23:03.401 "ddgst": false 00:23:03.401 }, 00:23:03.401 "method": "bdev_nvme_attach_controller" 00:23:03.401 },{ 00:23:03.401 "params": { 00:23:03.401 "name": "Nvme3", 00:23:03.401 "trtype": "tcp", 00:23:03.401 "traddr": "10.0.0.2", 00:23:03.401 "adrfam": "ipv4", 00:23:03.401 "trsvcid": "4420", 00:23:03.401 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:03.401 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:03.401 "hdgst": false, 00:23:03.401 "ddgst": false 00:23:03.401 }, 00:23:03.401 "method": "bdev_nvme_attach_controller" 00:23:03.401 },{ 00:23:03.401 "params": { 00:23:03.401 "name": "Nvme4", 00:23:03.401 "trtype": "tcp", 00:23:03.401 "traddr": "10.0.0.2", 00:23:03.401 "adrfam": "ipv4", 00:23:03.401 "trsvcid": "4420", 00:23:03.401 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:03.401 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:03.401 "hdgst": false, 00:23:03.401 "ddgst": false 00:23:03.401 }, 00:23:03.401 "method": "bdev_nvme_attach_controller" 00:23:03.401 },{ 00:23:03.401 "params": { 00:23:03.401 "name": "Nvme5", 00:23:03.401 "trtype": "tcp", 00:23:03.401 "traddr": "10.0.0.2", 00:23:03.401 "adrfam": "ipv4", 00:23:03.401 "trsvcid": "4420", 00:23:03.401 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:03.401 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:03.401 "hdgst": false, 00:23:03.401 "ddgst": false 00:23:03.401 }, 00:23:03.401 "method": "bdev_nvme_attach_controller" 00:23:03.401 },{ 00:23:03.401 "params": { 00:23:03.401 "name": "Nvme6", 00:23:03.401 "trtype": "tcp", 00:23:03.401 "traddr": "10.0.0.2", 00:23:03.401 "adrfam": "ipv4", 00:23:03.401 "trsvcid": "4420", 00:23:03.401 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:03.401 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:03.401 "hdgst": false, 00:23:03.401 "ddgst": false 00:23:03.401 }, 00:23:03.401 "method": "bdev_nvme_attach_controller" 00:23:03.401 },{ 00:23:03.401 "params": { 00:23:03.401 "name": "Nvme7", 00:23:03.401 "trtype": "tcp", 00:23:03.401 "traddr": "10.0.0.2", 
00:23:03.401 "adrfam": "ipv4", 00:23:03.401 "trsvcid": "4420", 00:23:03.401 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:03.401 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:03.401 "hdgst": false, 00:23:03.401 "ddgst": false 00:23:03.401 }, 00:23:03.401 "method": "bdev_nvme_attach_controller" 00:23:03.401 },{ 00:23:03.401 "params": { 00:23:03.401 "name": "Nvme8", 00:23:03.401 "trtype": "tcp", 00:23:03.401 "traddr": "10.0.0.2", 00:23:03.401 "adrfam": "ipv4", 00:23:03.401 "trsvcid": "4420", 00:23:03.401 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:03.401 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:03.401 "hdgst": false, 00:23:03.401 "ddgst": false 00:23:03.401 }, 00:23:03.401 "method": "bdev_nvme_attach_controller" 00:23:03.401 },{ 00:23:03.401 "params": { 00:23:03.401 "name": "Nvme9", 00:23:03.401 "trtype": "tcp", 00:23:03.401 "traddr": "10.0.0.2", 00:23:03.401 "adrfam": "ipv4", 00:23:03.401 "trsvcid": "4420", 00:23:03.401 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:03.401 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:03.401 "hdgst": false, 00:23:03.401 "ddgst": false 00:23:03.401 }, 00:23:03.401 "method": "bdev_nvme_attach_controller" 00:23:03.401 },{ 00:23:03.401 "params": { 00:23:03.401 "name": "Nvme10", 00:23:03.401 "trtype": "tcp", 00:23:03.401 "traddr": "10.0.0.2", 00:23:03.401 "adrfam": "ipv4", 00:23:03.401 "trsvcid": "4420", 00:23:03.401 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:03.401 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:03.401 "hdgst": false, 00:23:03.401 "ddgst": false 00:23:03.401 }, 00:23:03.401 "method": "bdev_nvme_attach_controller" 00:23:03.401 }' 00:23:03.401 [2024-11-20 17:05:55.505244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.401 [2024-11-20 17:05:55.542046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.786 Running I/O for 1 seconds... 
00:23:05.990 1861.00 IOPS, 116.31 MiB/s 00:23:05.990 Latency(us) 00:23:05.990 [2024-11-20T16:05:58.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.990 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:05.990 Verification LBA range: start 0x0 length 0x400 00:23:05.990 Nvme1n1 : 1.08 237.34 14.83 0.00 0.00 265845.55 16930.13 253405.87 00:23:05.990 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:05.990 Verification LBA range: start 0x0 length 0x400 00:23:05.990 Nvme2n1 : 1.09 235.69 14.73 0.00 0.00 263729.07 19442.35 216705.71 00:23:05.990 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:05.990 Verification LBA range: start 0x0 length 0x400 00:23:05.990 Nvme3n1 : 1.09 234.64 14.67 0.00 0.00 259679.79 19879.25 222822.40 00:23:05.990 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:05.990 Verification LBA range: start 0x0 length 0x400 00:23:05.990 Nvme4n1 : 1.10 232.96 14.56 0.00 0.00 257309.23 20097.71 251658.24 00:23:05.990 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:05.990 Verification LBA range: start 0x0 length 0x400 00:23:05.991 Nvme5n1 : 1.09 233.85 14.62 0.00 0.00 251216.21 30146.56 242920.11 00:23:05.991 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:05.991 Verification LBA range: start 0x0 length 0x400 00:23:05.991 Nvme6n1 : 1.15 221.74 13.86 0.00 0.00 261422.93 16930.13 253405.87 00:23:05.991 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:05.991 Verification LBA range: start 0x0 length 0x400 00:23:05.991 Nvme7n1 : 1.16 276.44 17.28 0.00 0.00 205887.49 19114.67 241172.48 00:23:05.991 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:05.991 Verification LBA range: start 0x0 length 0x400 00:23:05.991 Nvme8n1 : 1.19 269.48 16.84 0.00 0.00 207832.06 16820.91 249910.61 00:23:05.991 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:05.991 Verification LBA range: start 0x0 length 0x400 00:23:05.991 Nvme9n1 : 1.19 272.44 17.03 0.00 0.00 201899.70 1631.57 242920.11 00:23:05.991 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:05.991 Verification LBA range: start 0x0 length 0x400 00:23:05.991 Nvme10n1 : 1.20 265.96 16.62 0.00 0.00 203327.49 9994.24 281367.89 00:23:05.991 [2024-11-20T16:05:58.167Z] =================================================================================================================== 00:23:05.991 [2024-11-20T16:05:58.167Z] Total : 2480.52 155.03 0.00 0.00 234749.51 1631.57 281367.89 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:06.251 17:05:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:06.251 rmmod nvme_tcp 00:23:06.251 rmmod nvme_fabrics 00:23:06.251 rmmod nvme_keyring 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2027196 ']' 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2027196 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2027196 ']' 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2027196 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2027196 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2027196' 00:23:06.251 killing process with pid 2027196 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2027196 00:23:06.251 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2027196 00:23:06.512 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:06.512 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:06.512 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:06.512 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:06.512 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:23:06.512 17:05:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:06.512 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:23:06.512 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:06.512 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:06.512 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.512 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.512 17:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.059 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:09.059 00:23:09.059 real 0m17.014s 00:23:09.059 user 0m34.406s 00:23:09.059 sys 0m7.016s 00:23:09.059 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:09.059 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.059 ************************************ 00:23:09.059 END TEST nvmf_shutdown_tc1 00:23:09.059 ************************************ 00:23:09.059 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:09.059 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:09.059 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:09.059 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:09.059 ************************************ 00:23:09.059 START TEST nvmf_shutdown_tc2 00:23:09.059 ************************************ 00:23:09.059 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:09.059 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:09.059 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:09.059 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:09.059 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.059 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:09.059 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:09.059 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:09.059 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.060 
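tc1 is done (17 s wall time per the real/user/sys lines above) and run_test relaunches the suite entry point for tc2. A rough sketch of the run_test wrapper, inferred only from the START/END banners and the time(1) output in this trace (the real helper in test/common/autotest_common.sh does more bookkeeping, e.g. the '[' 2 -le 1 ']' argument check visible here):

    run_test() {
            local test_name=$1
            shift
            echo "************************************"
            echo "START TEST $test_name"
            echo "************************************"
            time "$@"    # produces the real/user/sys lines seen after each test
            echo "************************************"
            echo "END TEST $test_name"
            echo "************************************"
    }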
17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:09.060 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:09.060 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:09.060 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:09.060 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:09.060 17:06:00 
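The scan above matched both ports of an Intel E810 NIC (device ID 0x159b, bound to the ice driver) and resolved each PCI function to its kernel interface through sysfs. The mapping step, condensed from the expansions in the trace:

    # minimal sketch: map each allow-listed PCI function to its net devices
    for pci in 0000:4b:00.0 0000:4b:00.1; do
            pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
            pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
            echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done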
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.060 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.061 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.061 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.061 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:09.061 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:09.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:23:09.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:23:09.061 00:23:09.061 --- 10.0.0.2 ping statistics --- 00:23:09.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.061 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:09.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:23:09.061 00:23:09.061 --- 10.0.0.1 ping statistics --- 00:23:09.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.061 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2029367 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2029367 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2029367 ']' 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.061 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.061 [2024-11-20 17:06:01.194258] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:23:09.061 [2024-11-20 17:06:01.194324] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.321 [2024-11-20 17:06:01.289170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:09.321 [2024-11-20 17:06:01.327280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.321 [2024-11-20 17:06:01.327312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.321 [2024-11-20 17:06:01.327319] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.321 [2024-11-20 17:06:01.327324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.321 [2024-11-20 17:06:01.327329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.321 [2024-11-20 17:06:01.328927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.321 [2024-11-20 17:06:01.329265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:09.321 [2024-11-20 17:06:01.329496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.321 [2024-11-20 17:06:01.329497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:09.891 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.891 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:09.891 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:09.891 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:09.891 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.891 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.891 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:09.891 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.891 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.891 [2024-11-20 17:06:02.050359] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.891 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.891 
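The target is now up (reactors on cores 1-4, PID 2029367) and the TCP transport has been created via nvmf_create_transport -t tcp -o -u 8192. The network plumbing this relies on was laid down by nvmf_tcp_init earlier in the trace; condensed, with interface names and addresses copied from the log:

    ip netns add cvl_0_0_ns_spdk                  # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # first E810 port -> target namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                            # initiator -> target (0.617 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Traffic between 10.0.0.1 and 10.0.0.2 therefore crosses the physical link between the two E810 ports rather than loopback.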
17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:09.891 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:09.891 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.891 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.891 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 
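Each iteration of the loop above cats one subsystem's worth of RPC commands into rpcs.txt, which rpc_cmd then replays against the target. The heredoc itself is not echoed in this log; judging by the Malloc1-Malloc10 bdevs and the 10.0.0.2:4420 listener that appear just below, each iteration plausibly appends something like this (illustrative sketch, not the literal shutdown.sh text):

    bdev_malloc_create -b Malloc$i $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420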
00:23:10.152 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.152 Malloc1 00:23:10.152 [2024-11-20 17:06:02.164339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.152 Malloc2 00:23:10.152 Malloc3 00:23:10.152 Malloc4 00:23:10.152 Malloc5 00:23:10.412 Malloc6 00:23:10.412 Malloc7 00:23:10.412 Malloc8 00:23:10.412 Malloc9 00:23:10.412 Malloc10 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2029646 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2029646 /var/tmp/bdevperf.sock 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2029646 ']' 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
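bdevperf (PID 2029646, saved as perfpid) is launched with -q 64 -o 65536 -w verify -t 10; its --json configuration, printed in full below, is generated by gen_nvmf_target_json 1 2 ... 10 and handed over through /dev/fd/63. Each of the ten entries is just the JSON form of one attach RPC; typed by hand against the bdevperf RPC socket, the first one would be roughly the following (standard rpc.py options, shown here as an assumption for illustration):

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
            -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
            -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1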
00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.412 { 00:23:10.412 "params": { 00:23:10.412 "name": "Nvme$subsystem", 00:23:10.412 "trtype": "$TEST_TRANSPORT", 00:23:10.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.412 "adrfam": "ipv4", 00:23:10.412 "trsvcid": "$NVMF_PORT", 00:23:10.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.412 "hdgst": ${hdgst:-false}, 00:23:10.412 "ddgst": ${ddgst:-false} 00:23:10.412 }, 00:23:10.412 "method": "bdev_nvme_attach_controller" 00:23:10.412 } 00:23:10.412 EOF 00:23:10.412 )") 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.412 { 00:23:10.412 "params": { 00:23:10.412 "name": "Nvme$subsystem", 00:23:10.412 "trtype": "$TEST_TRANSPORT", 00:23:10.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.412 "adrfam": "ipv4", 00:23:10.412 "trsvcid": "$NVMF_PORT", 00:23:10.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.412 "hdgst": ${hdgst:-false}, 00:23:10.412 "ddgst": ${ddgst:-false} 00:23:10.412 }, 00:23:10.412 "method": "bdev_nvme_attach_controller" 00:23:10.412 } 00:23:10.412 EOF 00:23:10.412 )") 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.412 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.412 { 00:23:10.412 "params": { 00:23:10.412 "name": "Nvme$subsystem", 00:23:10.412 "trtype": "$TEST_TRANSPORT", 00:23:10.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.412 "adrfam": "ipv4", 00:23:10.412 "trsvcid": "$NVMF_PORT", 00:23:10.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.413 "hdgst": ${hdgst:-false}, 00:23:10.413 "ddgst": ${ddgst:-false} 00:23:10.413 }, 00:23:10.413 "method": 
"bdev_nvme_attach_controller" 00:23:10.413 } 00:23:10.413 EOF 00:23:10.413 )") 00:23:10.413 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:10.673 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.673 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.673 { 00:23:10.673 "params": { 00:23:10.673 "name": "Nvme$subsystem", 00:23:10.673 "trtype": "$TEST_TRANSPORT", 00:23:10.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.673 "adrfam": "ipv4", 00:23:10.673 "trsvcid": "$NVMF_PORT", 00:23:10.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.673 "hdgst": ${hdgst:-false}, 00:23:10.673 "ddgst": ${ddgst:-false} 00:23:10.673 }, 00:23:10.673 "method": "bdev_nvme_attach_controller" 00:23:10.673 } 00:23:10.673 EOF 00:23:10.673 )") 00:23:10.673 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:10.673 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.673 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.673 { 00:23:10.673 "params": { 00:23:10.673 "name": "Nvme$subsystem", 00:23:10.673 "trtype": "$TEST_TRANSPORT", 00:23:10.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.674 "adrfam": "ipv4", 00:23:10.674 "trsvcid": "$NVMF_PORT", 00:23:10.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.674 "hdgst": ${hdgst:-false}, 00:23:10.674 "ddgst": ${ddgst:-false} 00:23:10.674 }, 00:23:10.674 "method": "bdev_nvme_attach_controller" 00:23:10.674 } 00:23:10.674 EOF 00:23:10.674 )") 00:23:10.674 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:10.674 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.674 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.674 { 00:23:10.674 "params": { 00:23:10.674 "name": "Nvme$subsystem", 00:23:10.674 "trtype": "$TEST_TRANSPORT", 00:23:10.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.674 "adrfam": "ipv4", 00:23:10.674 "trsvcid": "$NVMF_PORT", 00:23:10.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.674 "hdgst": ${hdgst:-false}, 00:23:10.674 "ddgst": ${ddgst:-false} 00:23:10.674 }, 00:23:10.674 "method": "bdev_nvme_attach_controller" 00:23:10.674 } 00:23:10.674 EOF 00:23:10.674 )") 00:23:10.674 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:10.674 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.674 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.674 { 00:23:10.674 "params": { 00:23:10.674 "name": "Nvme$subsystem", 00:23:10.674 "trtype": "$TEST_TRANSPORT", 00:23:10.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.674 "adrfam": "ipv4", 00:23:10.674 "trsvcid": "$NVMF_PORT", 00:23:10.674 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.674 "hdgst": ${hdgst:-false}, 00:23:10.674 "ddgst": ${ddgst:-false} 00:23:10.674 }, 00:23:10.674 "method": "bdev_nvme_attach_controller" 00:23:10.674 } 00:23:10.674 EOF 00:23:10.674 )") 00:23:10.674 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:10.674 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.674 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.674 { 00:23:10.674 "params": { 00:23:10.674 "name": "Nvme$subsystem", 00:23:10.674 "trtype": "$TEST_TRANSPORT", 00:23:10.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.674 "adrfam": "ipv4", 00:23:10.674 "trsvcid": "$NVMF_PORT", 00:23:10.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.674 "hdgst": ${hdgst:-false}, 00:23:10.674 "ddgst": ${ddgst:-false} 00:23:10.674 }, 00:23:10.674 "method": "bdev_nvme_attach_controller" 00:23:10.674 } 00:23:10.674 EOF 00:23:10.674 )") 00:23:10.674 [2024-11-20 17:06:02.622770] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:23:10.674 [2024-11-20 17:06:02.622837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2029646 ] 00:23:10.674 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:10.674 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.674 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.674 { 00:23:10.674 "params": { 00:23:10.674 "name": "Nvme$subsystem", 00:23:10.674 "trtype": "$TEST_TRANSPORT", 00:23:10.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.674 "adrfam": "ipv4", 00:23:10.674 "trsvcid": "$NVMF_PORT", 00:23:10.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.674 "hdgst": ${hdgst:-false}, 00:23:10.674 "ddgst": ${ddgst:-false} 00:23:10.674 }, 00:23:10.674 "method": "bdev_nvme_attach_controller" 00:23:10.674 } 00:23:10.674 EOF 00:23:10.674 )") 00:23:10.674 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:10.674 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.674 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.674 { 00:23:10.674 "params": { 00:23:10.674 "name": "Nvme$subsystem", 00:23:10.674 "trtype": "$TEST_TRANSPORT", 00:23:10.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.674 "adrfam": "ipv4", 00:23:10.674 "trsvcid": "$NVMF_PORT", 00:23:10.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.674 "hdgst": ${hdgst:-false}, 00:23:10.674 "ddgst": ${ddgst:-false} 00:23:10.674 }, 00:23:10.674 "method": "bdev_nvme_attach_controller" 00:23:10.674 } 00:23:10.674 EOF 00:23:10.674 )") 00:23:10.674 17:06:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:10.674 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:23:10.674 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:10.674 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:10.674 "params": { 00:23:10.674 "name": "Nvme1", 00:23:10.674 "trtype": "tcp", 00:23:10.674 "traddr": "10.0.0.2", 00:23:10.674 "adrfam": "ipv4", 00:23:10.674 "trsvcid": "4420", 00:23:10.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.674 "hdgst": false, 00:23:10.674 "ddgst": false 00:23:10.674 }, 00:23:10.674 "method": "bdev_nvme_attach_controller" 00:23:10.674 },{ 00:23:10.674 "params": { 00:23:10.674 "name": "Nvme2", 00:23:10.674 "trtype": "tcp", 00:23:10.674 "traddr": "10.0.0.2", 00:23:10.674 "adrfam": "ipv4", 00:23:10.674 "trsvcid": "4420", 00:23:10.674 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:10.674 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:10.674 "hdgst": false, 00:23:10.674 "ddgst": false 00:23:10.674 }, 00:23:10.674 "method": "bdev_nvme_attach_controller" 00:23:10.674 },{ 00:23:10.674 "params": { 00:23:10.674 "name": "Nvme3", 00:23:10.674 "trtype": "tcp", 00:23:10.674 "traddr": "10.0.0.2", 00:23:10.674 "adrfam": "ipv4", 00:23:10.674 "trsvcid": "4420", 00:23:10.674 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:10.674 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:10.674 "hdgst": false, 00:23:10.674 "ddgst": false 00:23:10.674 }, 00:23:10.674 "method": "bdev_nvme_attach_controller" 00:23:10.674 },{ 00:23:10.674 "params": { 00:23:10.674 "name": "Nvme4", 00:23:10.674 "trtype": "tcp", 00:23:10.674 "traddr": "10.0.0.2", 00:23:10.674 "adrfam": "ipv4", 00:23:10.674 "trsvcid": "4420", 00:23:10.674 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:10.674 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:10.674 "hdgst": false, 00:23:10.674 "ddgst": false 00:23:10.674 }, 00:23:10.674 "method": "bdev_nvme_attach_controller" 00:23:10.674 },{ 00:23:10.674 "params": { 00:23:10.674 "name": "Nvme5", 00:23:10.674 "trtype": "tcp", 00:23:10.674 "traddr": "10.0.0.2", 00:23:10.674 "adrfam": "ipv4", 00:23:10.675 "trsvcid": "4420", 00:23:10.675 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:10.675 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:10.675 "hdgst": false, 00:23:10.675 "ddgst": false 00:23:10.675 }, 00:23:10.675 "method": "bdev_nvme_attach_controller" 00:23:10.675 },{ 00:23:10.675 "params": { 00:23:10.675 "name": "Nvme6", 00:23:10.675 "trtype": "tcp", 00:23:10.675 "traddr": "10.0.0.2", 00:23:10.675 "adrfam": "ipv4", 00:23:10.675 "trsvcid": "4420", 00:23:10.675 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:10.675 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:10.675 "hdgst": false, 00:23:10.675 "ddgst": false 00:23:10.675 }, 00:23:10.675 "method": "bdev_nvme_attach_controller" 00:23:10.675 },{ 00:23:10.675 "params": { 00:23:10.675 "name": "Nvme7", 00:23:10.675 "trtype": "tcp", 00:23:10.675 "traddr": "10.0.0.2", 00:23:10.675 "adrfam": "ipv4", 00:23:10.675 "trsvcid": "4420", 00:23:10.675 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:10.675 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:10.675 "hdgst": false, 00:23:10.675 "ddgst": false 00:23:10.675 }, 00:23:10.675 "method": "bdev_nvme_attach_controller" 00:23:10.675 },{ 00:23:10.675 "params": { 00:23:10.675 "name": "Nvme8", 00:23:10.675 "trtype": "tcp", 
00:23:10.675 "traddr": "10.0.0.2", 00:23:10.675 "adrfam": "ipv4", 00:23:10.675 "trsvcid": "4420", 00:23:10.675 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:10.675 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:10.675 "hdgst": false, 00:23:10.675 "ddgst": false 00:23:10.675 }, 00:23:10.675 "method": "bdev_nvme_attach_controller" 00:23:10.675 },{ 00:23:10.675 "params": { 00:23:10.675 "name": "Nvme9", 00:23:10.675 "trtype": "tcp", 00:23:10.675 "traddr": "10.0.0.2", 00:23:10.675 "adrfam": "ipv4", 00:23:10.675 "trsvcid": "4420", 00:23:10.675 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:10.675 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:10.675 "hdgst": false, 00:23:10.675 "ddgst": false 00:23:10.675 }, 00:23:10.675 "method": "bdev_nvme_attach_controller" 00:23:10.675 },{ 00:23:10.675 "params": { 00:23:10.675 "name": "Nvme10", 00:23:10.675 "trtype": "tcp", 00:23:10.675 "traddr": "10.0.0.2", 00:23:10.675 "adrfam": "ipv4", 00:23:10.675 "trsvcid": "4420", 00:23:10.675 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:10.675 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:10.675 "hdgst": false, 00:23:10.675 "ddgst": false 00:23:10.675 }, 00:23:10.675 "method": "bdev_nvme_attach_controller" 00:23:10.675 }' 00:23:10.675 [2024-11-20 17:06:02.712396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.675 [2024-11-20 17:06:02.748746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.062 Running I/O for 10 seconds... 00:23:12.062 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.062 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:12.062 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:12.062 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.062 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.324 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.324 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:12.324 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:12.324 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:12.324 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:12.324 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:12.324 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:12.324 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:12.324 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:12.324 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:12.324 17:06:04 
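bdevperf's single reactor is on core 0 and the 10-second verify run has started. Before triggering the shutdown, the script waits for proof that I/O is actually flowing: it polls bdevperf's iostat until Nvme1n1 has completed at least 100 reads. Reconstructed from the trace below (where read_io_count climbs 3 -> 67 -> 131 over three polls), the loop is roughly:

    waitforio() {
            local rpc_sock=$1 bdev=$2 ret=1 i
            for ((i = 10; i != 0; i--)); do
                    read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                            jq -r '.bdevs[0].num_read_ops')
                    if [ "$read_io_count" -ge 100 ]; then
                            ret=0     # enough I/O observed, shutdown may proceed
                            break
                    fi
                    sleep 0.25        # back off and poll again
            done
            return $ret
    }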
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.324 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.324 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.324 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:12.324 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:12.324 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:12.584 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:12.584 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:12.584 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:12.584 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:12.584 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.584 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.584 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.584 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:12.584 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:12.584 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:12.845 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:12.845 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:12.845 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:12.845 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:12.845 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.845 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.845 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.845 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:12.845 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:12.845 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:12.845 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:12.845 17:06:04 
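The jq filter in that loop assumes the usual shape of the bdev_get_iostat reply. For the second poll above (67 reads of 64 KiB each) the relevant part would look like this; the field names are the real iostat fields, the tick_rate value is illustrative, and other fields are elided:

    {
      "tick_rate": 2400000000,
      "bdevs": [
        {
          "name": "Nvme1n1",
          "num_read_ops": 67,
          "bytes_read": 4390912
        }
      ]
    }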
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0
00:23:12.845 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2029646
00:23:12.845 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2029646 ']'
00:23:12.845 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2029646
00:23:12.845 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:23:12.845 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:12.845 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2029646
00:23:13.105 17:06:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:13.105 17:06:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:13.106 17:06:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2029646'
00:23:13.106 killing process with pid 2029646
00:23:13.106 17:06:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2029646
00:23:13.106 17:06:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2029646
00:23:13.106 Received shutdown signal, test time was about 0.978299 seconds
00:23:13.106
00:23:13.106 Latency(us)
00:23:13.106 [2024-11-20T16:06:05.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:13.106 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.106 Verification LBA range: start 0x0 length 0x400
00:23:13.106 Nvme1n1 : 0.98 261.93 16.37 0.00 0.00 241502.93 14964.05 227191.47
00:23:13.106 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.106 Verification LBA range: start 0x0 length 0x400
00:23:13.106 Nvme2n1 : 0.95 213.17 13.32 0.00 0.00 287881.75 5843.63 246415.36
00:23:13.106 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.106 Verification LBA range: start 0x0 length 0x400
00:23:13.106 Nvme3n1 : 0.96 267.36 16.71 0.00 0.00 226802.77 19114.67 230686.72
00:23:13.106 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.106 Verification LBA range: start 0x0 length 0x400
00:23:13.106 Nvme4n1 : 0.97 264.74 16.55 0.00 0.00 224293.12 18131.63 242920.11
00:23:13.106 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.106 Verification LBA range: start 0x0 length 0x400
00:23:13.106 Nvme5n1 : 0.95 202.90 12.68 0.00 0.00 285643.09 18350.08 253405.87
00:23:13.106 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.106 Verification LBA range: start 0x0 length 0x400
00:23:13.106 Nvme6n1 : 0.97 263.76 16.49 0.00 0.00 215411.20 17585.49 251658.24
00:23:13.106 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.106 Verification LBA range: start 0x0 length 0x400
00:23:13.106 Nvme7n1 : 0.96 265.50 16.59 0.00 0.00 208915.63 17257.81 249910.61
00:23:13.106 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.106 Verification LBA range: start 0x0 length 0x400
00:23:13.106 Nvme8n1 : 0.97 266.97 16.69 0.00 0.00 203292.20 1856.85 237677.23
00:23:13.106 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.106 Verification LBA range: start 0x0 length 0x400
00:23:13.106 Nvme9n1 : 0.95 201.13 12.57 0.00 0.00 262574.65 19551.57 248162.99
00:23:13.106 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.106 Verification LBA range: start 0x0 length 0x400
00:23:13.106 Nvme10n1 : 0.96 199.90 12.49 0.00 0.00 258116.55 19442.35 272629.76
00:23:13.106 [2024-11-20T16:06:05.282Z] ===================================================================================================================
00:23:13.106 [2024-11-20T16:06:05.282Z] Total : 2407.37 150.46 0.00 0.00 238031.59 1856.85 272629.76
00:23:13.106 17:06:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:23:14.080 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2029367
00:23:14.080 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:23:14.080 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:14.080 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:14.080 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:14.080 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:14.080 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:14.080 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync
00:23:14.080 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:14.080 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e
00:23:14.080 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:14.080 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:14.080 rmmod nvme_tcp
00:23:14.340 rmmod nvme_fabrics
00:23:14.340 rmmod nvme_keyring
00:23:14.340 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:14.340 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e
00:23:14.340 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0
00:23:14.340 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2029367 ']'
00:23:14.340 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2029367
00:23:14.340 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2029367 ']'
00:23:14.340 17:06:06
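Teardown is symmetric: the bdevperf process (2029646, reactor_0) was reaped above, and the same killprocess helper now goes after the nvmf target (2029367, reactor_1) below. From the '[' -z ... ']' / kill -0 / ps / kill / wait sequence in the trace, the helper reconstructs to roughly this sketch (the real one also special-cases processes launched through sudo, hence the '[' reactor_1 = sudo ']' test):

    killprocess() {
            local pid=$1
            [ -z "$pid" ] && return 1
            kill -0 "$pid" || return 0                        # already gone, nothing to do
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 / reactor_1 here
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"
    }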
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2029367 00:23:14.340 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:14.340 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:14.340 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2029367 00:23:14.340 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:14.340 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:14.340 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2029367' 00:23:14.340 killing process with pid 2029367 00:23:14.340 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2029367 00:23:14.340 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2029367 00:23:14.599 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:14.599 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:14.599 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:14.599 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:14.599 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:14.599 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:14.599 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:14.599 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:14.599 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:14.599 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.599 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.599 17:06:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.509 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:16.509 00:23:16.509 real 0m7.924s 00:23:16.509 user 0m23.912s 00:23:16.509 sys 0m1.326s 00:23:16.509 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:16.509 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:16.509 ************************************ 00:23:16.509 END TEST nvmf_shutdown_tc2 00:23:16.509 ************************************ 00:23:16.771 17:06:08 
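The two teardowns above both go through killprocess (bdevperf pid 2029646 first, then the target pid 2029367), and the trace always walks the same ladder: argument check (@954), liveness probe (@958), command-name lookup so a wrapping sudo is never signalled blindly (@959-@964), then kill and wait (@973/@978). A simplified bash reconstruction assembled from the trace alone; the sudo branch is never taken in this log, so its handling below is an assumption, and the real autotest_common.sh helper may differ in details:

    killprocess() {
        # @954: a pid argument is required
        [ -z "$1" ] && return 1
        local pid=$1 process_name=
        # @958: kill -0 delivers no signal, it only tests that the pid exists
        # (assumed: nothing to do if the process is already gone)
        kill -0 "$pid" || return 0
        # @959/@960: on Linux, resolve the command name behind the pid
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # @964: guard against signalling a wrapping sudo; here the comm was
        # reactor_0 / reactor_1, so this branch never fires in the log
        if [ "$process_name" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"   # @973: default SIGTERM
            wait "$pid"   # @978: reap the child so its exit is observed
        fi
    }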
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:16.771 ************************************ 00:23:16.771 START TEST nvmf_shutdown_tc3 00:23:16.771 ************************************ 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:16.771 17:06:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:16.771 17:06:08 
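The array setup above (nvmf/common.sh@315-344) hard-codes the PCI device IDs the harness recognizes: Intel E810 (0x1592, 0x159b), X722 (0x37d2), and a series of Mellanox ConnectX parts; since this rig is configured for e810, pci_devs collapses to the two matched E810 functions. The @410-429 loop traced below then maps each PCI function to its kernel netdev through sysfs. A minimal sketch of that mapping step, assuming pci_devs holds "domain:bus:device.function" strings; the real loop also checks that the link is administratively up (the [[ up == up ]] test at @418):

    # Let a non-matching glob expand to nothing instead of a literal pattern
    shopt -s nullglob
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # The kernel publishes a NIC's interface name(s) under its PCI node
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        ((${#pci_net_devs[@]} == 0)) && continue   # no bound driver, skip
        pci_net_devs=("${pci_net_devs[@]##*/}")    # strip the sysfs path
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done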
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:16.771 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:16.771 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:16.771 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:16.771 17:06:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.771 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:16.772 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.772 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.033 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.034 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.034 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:17.034 17:06:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:17.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:17.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:23:17.034 00:23:17.034 --- 10.0.0.2 ping statistics --- 00:23:17.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.034 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:17.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:23:17.034 00:23:17.034 --- 10.0.0.1 ping statistics --- 00:23:17.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.034 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2031118 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2031118 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2031118 ']' 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
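Strung together, the nvmf_tcp_init commands traced above give the target a private view of the first E810 port: a fresh network namespace, one interface on each side of 10.0.0.0/24, a tagged firewall exception for the NVMe/TCP port, and a ping in each direction to prove the path before the target is launched. The same sequence, condensed; every command, name, and address below is taken from the trace:

    ip netns add cvl_0_0_ns_spdk                        # namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Tag the rule so teardown can strip it selectively later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host

The SPDK_NVMF comment is what the iptr teardown keys on (iptables-save piped through grep -v SPDK_NVMF into iptables-restore, visible at the end of tc2 above), so cleanup removes exactly the rules the test added.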
00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.034 17:06:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:17.294 [2024-11-20 17:06:09.211918] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:23:17.294 [2024-11-20 17:06:09.211985] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.294 [2024-11-20 17:06:09.306632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:17.294 [2024-11-20 17:06:09.341027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.294 [2024-11-20 17:06:09.341059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.294 [2024-11-20 17:06:09.341065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.294 [2024-11-20 17:06:09.341070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.294 [2024-11-20 17:06:09.341074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.294 [2024-11-20 17:06:09.342407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.294 [2024-11-20 17:06:09.342561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.294 [2024-11-20 17:06:09.342712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.294 [2024-11-20 17:06:09.342714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:17.864 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.864 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:17.864 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:17.864 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:17.865 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.126 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.126 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:18.126 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.126 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.126 [2024-11-20 17:06:10.070728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.126 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.126 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:18.126 17:06:10 
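With the target process up and /var/tmp/spdk.sock answering, shutdown.sh@21 issues the first RPC through rpc_cmd, the harness's wrapper around SPDK's scripts/rpc.py. Run by hand outside the harness, the traced transport creation would look roughly like the line below; the socket path comes from the waitforlisten trace above, and since the RPC socket is a filesystem UNIX socket, no namespace juggling is needed on the client side:

    # Standalone equivalent of the traced 'rpc_cmd nvmf_create_transport -t tcp -o -u 8192';
    # -u sets the I/O unit size, and -o is the TCP-specific option that was folded
    # into NVMF_TRANSPORT_OPTS at nvmf/common.sh@493 above
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192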
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:18.126 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:18.126 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.126 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:18.126 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:18.126 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:18.126 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:18.126 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:18.126 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:18.126 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:18.126 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:18.126 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:18.126 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:18.126 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:18.127 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:18.127 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:18.127 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:18.127 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:18.127 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:18.127 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:18.127 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:18.127 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:18.127 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:18.127 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:18.127 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:18.127 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.127 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.127 Malloc1 
00:23:18.127 [2024-11-20 17:06:10.185850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.127 Malloc2 00:23:18.127 Malloc3 00:23:18.127 Malloc4 00:23:18.387 Malloc5 00:23:18.387 Malloc6 00:23:18.387 Malloc7 00:23:18.387 Malloc8 00:23:18.387 Malloc9 00:23:18.387 Malloc10 00:23:18.387 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.387 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:18.387 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:18.387 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.647 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2031770 00:23:18.647 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2031770 /var/tmp/bdevperf.sock 00:23:18.647 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2031770 ']' 00:23:18.647 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.647 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
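The create_subsystems loop above (shutdown.sh@27-29) first clears rpcs.txt and then appends one heredoc block per subsystem; xtrace only logs the bare "cat", never the heredoc body. Judging from what materializes later in this log (bdevs Malloc1-Malloc10, subsystems cnode1-cnode10, a TCP listener on 10.0.0.2:4420), each appended block plausibly reads like the sketch below; the malloc size and block size are illustrative placeholders, not values recoverable from this run:

    bdev_malloc_create -b Malloc1 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bare rpc_cmd traced at shutdown.sh@36 then appears to replay the accumulated file in one batch over the target's RPC socket.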
00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:18.648 { 00:23:18.648 "params": { 00:23:18.648 "name": "Nvme$subsystem", 00:23:18.648 "trtype": "$TEST_TRANSPORT", 00:23:18.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.648 "adrfam": "ipv4", 00:23:18.648 "trsvcid": "$NVMF_PORT", 00:23:18.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.648 "hdgst": ${hdgst:-false}, 00:23:18.648 "ddgst": ${ddgst:-false} 00:23:18.648 }, 00:23:18.648 "method": "bdev_nvme_attach_controller" 00:23:18.648 } 00:23:18.648 EOF 00:23:18.648 )") 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:18.648 { 00:23:18.648 "params": { 00:23:18.648 "name": "Nvme$subsystem", 00:23:18.648 "trtype": "$TEST_TRANSPORT", 00:23:18.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.648 "adrfam": "ipv4", 00:23:18.648 "trsvcid": "$NVMF_PORT", 00:23:18.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.648 "hdgst": ${hdgst:-false}, 00:23:18.648 "ddgst": ${ddgst:-false} 00:23:18.648 }, 00:23:18.648 "method": "bdev_nvme_attach_controller" 00:23:18.648 } 00:23:18.648 EOF 00:23:18.648 )") 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:18.648 { 00:23:18.648 "params": { 00:23:18.648 "name": "Nvme$subsystem", 00:23:18.648 "trtype": "$TEST_TRANSPORT", 00:23:18.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.648 "adrfam": "ipv4", 00:23:18.648 "trsvcid": "$NVMF_PORT", 00:23:18.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.648 "hdgst": ${hdgst:-false}, 00:23:18.648 "ddgst": ${ddgst:-false} 00:23:18.648 }, 00:23:18.648 "method": 
"bdev_nvme_attach_controller" 00:23:18.648 } 00:23:18.648 EOF 00:23:18.648 )") 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:18.648 { 00:23:18.648 "params": { 00:23:18.648 "name": "Nvme$subsystem", 00:23:18.648 "trtype": "$TEST_TRANSPORT", 00:23:18.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.648 "adrfam": "ipv4", 00:23:18.648 "trsvcid": "$NVMF_PORT", 00:23:18.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.648 "hdgst": ${hdgst:-false}, 00:23:18.648 "ddgst": ${ddgst:-false} 00:23:18.648 }, 00:23:18.648 "method": "bdev_nvme_attach_controller" 00:23:18.648 } 00:23:18.648 EOF 00:23:18.648 )") 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:18.648 { 00:23:18.648 "params": { 00:23:18.648 "name": "Nvme$subsystem", 00:23:18.648 "trtype": "$TEST_TRANSPORT", 00:23:18.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.648 "adrfam": "ipv4", 00:23:18.648 "trsvcid": "$NVMF_PORT", 00:23:18.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.648 "hdgst": ${hdgst:-false}, 00:23:18.648 "ddgst": ${ddgst:-false} 00:23:18.648 }, 00:23:18.648 "method": "bdev_nvme_attach_controller" 00:23:18.648 } 00:23:18.648 EOF 00:23:18.648 )") 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:18.648 { 00:23:18.648 "params": { 00:23:18.648 "name": "Nvme$subsystem", 00:23:18.648 "trtype": "$TEST_TRANSPORT", 00:23:18.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.648 "adrfam": "ipv4", 00:23:18.648 "trsvcid": "$NVMF_PORT", 00:23:18.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.648 "hdgst": ${hdgst:-false}, 00:23:18.648 "ddgst": ${ddgst:-false} 00:23:18.648 }, 00:23:18.648 "method": "bdev_nvme_attach_controller" 00:23:18.648 } 00:23:18.648 EOF 00:23:18.648 )") 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:18.648 [2024-11-20 17:06:10.634398] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:23:18.648 [2024-11-20 17:06:10.634452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2031770 ] 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:18.648 { 00:23:18.648 "params": { 00:23:18.648 "name": "Nvme$subsystem", 00:23:18.648 "trtype": "$TEST_TRANSPORT", 00:23:18.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.648 "adrfam": "ipv4", 00:23:18.648 "trsvcid": "$NVMF_PORT", 00:23:18.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.648 "hdgst": ${hdgst:-false}, 00:23:18.648 "ddgst": ${ddgst:-false} 00:23:18.648 }, 00:23:18.648 "method": "bdev_nvme_attach_controller" 00:23:18.648 } 00:23:18.648 EOF 00:23:18.648 )") 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:18.648 { 00:23:18.648 "params": { 00:23:18.648 "name": "Nvme$subsystem", 00:23:18.648 "trtype": "$TEST_TRANSPORT", 00:23:18.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.648 "adrfam": "ipv4", 00:23:18.648 "trsvcid": "$NVMF_PORT", 00:23:18.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.648 "hdgst": ${hdgst:-false}, 00:23:18.648 "ddgst": ${ddgst:-false} 00:23:18.648 }, 00:23:18.648 "method": "bdev_nvme_attach_controller" 00:23:18.648 } 00:23:18.648 EOF 00:23:18.648 )") 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:18.648 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:18.648 { 00:23:18.648 "params": { 00:23:18.648 "name": "Nvme$subsystem", 00:23:18.648 "trtype": "$TEST_TRANSPORT", 00:23:18.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.648 "adrfam": "ipv4", 00:23:18.648 "trsvcid": "$NVMF_PORT", 00:23:18.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.649 "hdgst": ${hdgst:-false}, 00:23:18.649 "ddgst": ${ddgst:-false} 00:23:18.649 }, 00:23:18.649 "method": "bdev_nvme_attach_controller" 00:23:18.649 } 00:23:18.649 EOF 00:23:18.649 )") 00:23:18.649 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:18.649 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:18.649 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:18.649 { 00:23:18.649 "params": { 00:23:18.649 "name": "Nvme$subsystem", 00:23:18.649 "trtype": "$TEST_TRANSPORT", 00:23:18.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.649 
"adrfam": "ipv4", 00:23:18.649 "trsvcid": "$NVMF_PORT", 00:23:18.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.649 "hdgst": ${hdgst:-false}, 00:23:18.649 "ddgst": ${ddgst:-false} 00:23:18.649 }, 00:23:18.649 "method": "bdev_nvme_attach_controller" 00:23:18.649 } 00:23:18.649 EOF 00:23:18.649 )") 00:23:18.649 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:18.649 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:23:18.649 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:18.649 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:18.649 "params": { 00:23:18.649 "name": "Nvme1", 00:23:18.649 "trtype": "tcp", 00:23:18.649 "traddr": "10.0.0.2", 00:23:18.649 "adrfam": "ipv4", 00:23:18.649 "trsvcid": "4420", 00:23:18.649 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.649 "hdgst": false, 00:23:18.649 "ddgst": false 00:23:18.649 }, 00:23:18.649 "method": "bdev_nvme_attach_controller" 00:23:18.649 },{ 00:23:18.649 "params": { 00:23:18.649 "name": "Nvme2", 00:23:18.649 "trtype": "tcp", 00:23:18.649 "traddr": "10.0.0.2", 00:23:18.649 "adrfam": "ipv4", 00:23:18.649 "trsvcid": "4420", 00:23:18.649 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:18.649 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:18.649 "hdgst": false, 00:23:18.649 "ddgst": false 00:23:18.649 }, 00:23:18.649 "method": "bdev_nvme_attach_controller" 00:23:18.649 },{ 00:23:18.649 "params": { 00:23:18.649 "name": "Nvme3", 00:23:18.649 "trtype": "tcp", 00:23:18.649 "traddr": "10.0.0.2", 00:23:18.649 "adrfam": "ipv4", 00:23:18.649 "trsvcid": "4420", 00:23:18.649 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:18.649 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:18.649 "hdgst": false, 00:23:18.649 "ddgst": false 00:23:18.649 }, 00:23:18.649 "method": "bdev_nvme_attach_controller" 00:23:18.649 },{ 00:23:18.649 "params": { 00:23:18.649 "name": "Nvme4", 00:23:18.649 "trtype": "tcp", 00:23:18.649 "traddr": "10.0.0.2", 00:23:18.649 "adrfam": "ipv4", 00:23:18.649 "trsvcid": "4420", 00:23:18.649 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:18.649 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:18.649 "hdgst": false, 00:23:18.649 "ddgst": false 00:23:18.649 }, 00:23:18.649 "method": "bdev_nvme_attach_controller" 00:23:18.649 },{ 00:23:18.649 "params": { 00:23:18.649 "name": "Nvme5", 00:23:18.649 "trtype": "tcp", 00:23:18.649 "traddr": "10.0.0.2", 00:23:18.649 "adrfam": "ipv4", 00:23:18.649 "trsvcid": "4420", 00:23:18.649 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:18.649 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:18.649 "hdgst": false, 00:23:18.649 "ddgst": false 00:23:18.649 }, 00:23:18.649 "method": "bdev_nvme_attach_controller" 00:23:18.649 },{ 00:23:18.649 "params": { 00:23:18.649 "name": "Nvme6", 00:23:18.649 "trtype": "tcp", 00:23:18.649 "traddr": "10.0.0.2", 00:23:18.649 "adrfam": "ipv4", 00:23:18.649 "trsvcid": "4420", 00:23:18.649 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:18.649 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:18.649 "hdgst": false, 00:23:18.649 "ddgst": false 00:23:18.649 }, 00:23:18.649 "method": "bdev_nvme_attach_controller" 00:23:18.649 },{ 00:23:18.649 "params": { 00:23:18.649 "name": "Nvme7", 00:23:18.649 "trtype": "tcp", 00:23:18.649 "traddr": "10.0.0.2", 
00:23:18.649 "adrfam": "ipv4", 00:23:18.649 "trsvcid": "4420", 00:23:18.649 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:18.649 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:18.649 "hdgst": false, 00:23:18.649 "ddgst": false 00:23:18.649 }, 00:23:18.649 "method": "bdev_nvme_attach_controller" 00:23:18.649 },{ 00:23:18.649 "params": { 00:23:18.649 "name": "Nvme8", 00:23:18.649 "trtype": "tcp", 00:23:18.649 "traddr": "10.0.0.2", 00:23:18.649 "adrfam": "ipv4", 00:23:18.649 "trsvcid": "4420", 00:23:18.649 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:18.649 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:18.649 "hdgst": false, 00:23:18.649 "ddgst": false 00:23:18.649 }, 00:23:18.649 "method": "bdev_nvme_attach_controller" 00:23:18.649 },{ 00:23:18.649 "params": { 00:23:18.649 "name": "Nvme9", 00:23:18.649 "trtype": "tcp", 00:23:18.649 "traddr": "10.0.0.2", 00:23:18.649 "adrfam": "ipv4", 00:23:18.649 "trsvcid": "4420", 00:23:18.649 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:18.649 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:18.649 "hdgst": false, 00:23:18.649 "ddgst": false 00:23:18.649 }, 00:23:18.649 "method": "bdev_nvme_attach_controller" 00:23:18.649 },{ 00:23:18.649 "params": { 00:23:18.649 "name": "Nvme10", 00:23:18.649 "trtype": "tcp", 00:23:18.649 "traddr": "10.0.0.2", 00:23:18.649 "adrfam": "ipv4", 00:23:18.649 "trsvcid": "4420", 00:23:18.649 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:18.649 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:18.649 "hdgst": false, 00:23:18.649 "ddgst": false 00:23:18.649 }, 00:23:18.649 "method": "bdev_nvme_attach_controller" 00:23:18.649 }' 00:23:18.649 [2024-11-20 17:06:10.724466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.649 [2024-11-20 17:06:10.760796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.559 Running I/O for 10 seconds... 
00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2031118 00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2031118 ']' 
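The @51-70 block above is waitforio: with bdevperf running, the harness polls its RPC socket up to ten times for Nvme1n1's cumulative read count and declares I/O healthy once the count reaches 100 (it read 131 on the first poll here). A reconstruction from the traced checks; rpc_cmd is the harness RPC wrapper seen throughout, and the pacing sleep is an assumption, since a single iteration sufficed in this run:

    waitforio() {
        # @51/@55: both an RPC socket and a bdev name are required
        local sock=$1 bdev=$2
        [ -z "$sock" ] && return 1
        [ -z "$bdev" ] && return 1
        local ret=1 i read_io_count
        # @60: up to ten polls
        for ((i = 10; i != 0; i--)); do
            # @61: cumulative reads, straight out of bdevperf's iostat RPC
            read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            # @64: a hundred completed reads is taken as proof I/O is flowing
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 1   # assumed pacing between polls; not visible in this trace
        done
        return $ret
    }

Here it returned immediately, and the killprocess that follows (@136) tears the target down while bdevperf is still driving I/O, which is exactly the condition nvmf_shutdown_tc3 exists to exercise.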
00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2031118
00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2031118
00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2031118'
killing process with pid 2031118
00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2031118
00:23:21.130 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2031118
00:23:21.409 [2024-11-20 17:06:13.305461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda53e0 is same with the state(6) to be set
00:23:21.409 [... last message repeated for tqpair=0xda53e0, once per queue-pair state transition, through 17:06:13.305806 ...]
00:23:21.409 [2024-11-20 17:06:13.307973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda31d0 is same with the state(6) to be set
00:23:21.410 [2024-11-20 17:06:13.308702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set
00:23:21.410 [... last message repeated for tqpair=0xda36c0 through 17:06:13.308792 ...]
00:23:21.410 [2024-11-20 17:06:13.308796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is 
same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.308998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda36c0 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.410 [2024-11-20 17:06:13.309793] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 
00:23:21.411 [2024-11-20 17:06:13.309899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.309996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is 
same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.310001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.310006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3b90 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.310921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.310937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.310942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.310946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.310951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.310956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.310961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.310966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.310970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.310975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.310980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.310984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.310989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.310995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.411 [2024-11-20 17:06:13.311115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311129] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.311230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4060 is same with the state(6) to be set 
00:23:21.412 [2024-11-20 17:06:13.312126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is 
same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.412 [2024-11-20 17:06:13.312441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.312446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.312450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4530 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313133] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 
00:23:21.413 [2024-11-20 17:06:13.313240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is 
same with the state(6) to be set 00:23:21.413 [2024-11-20 17:06:13.313343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a20 is same with the state(6) to be set [... same tqpair=0xda4a20 message repeated 16 more times, 17:06:13.313347 through 17:06:13.313419 ...] 00:23:21.413 [2024-11-20 17:06:13.313865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4ef0 is same with the state(6) to be set [... same tqpair=0xda4ef0 message repeated 56 more times, 17:06:13.313879 through 17:06:13.314149 ...]
00:23:21.414 [2024-11-20 17:06:13.321498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.414 [2024-11-20 17:06:13.321532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... same ASYNC EVENT REQUEST (0c) / ABORTED - SQ DELETION (00/08) pair repeated for cid:1-3, followed at 17:06:13.321590 by nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2370790 is same with the state(6) to be set ...] [... the same four-command abort sequence and recv-state error repeated for tqpair=0x228a610 (17:06:13.321686), 0x27b9080 (17:06:13.321775), 0x2796360 (17:06:13.321862), 0x2793dc0 (17:06:13.321949), 0x2372cb0 (17:06:13.322035), 0x2372850 (17:06:13.322119), 0x2370fc0 (17:06:13.322214) and 0x27db290 (17:06:13.322300) ...] 00:23:21.415 [2024-11-20 17:06:13.322387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.415 [2024-11-20 17:06:13.322397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... matching WRITE commands cid:17-63 (lba 26752-32640, len:128) and READ commands cid:0-15 (lba 24576-26496, len:128), each completed ABORTED - SQ DELETION (00/08), 17:06:13.322413 through 17:06:13.323467 ...]
00:23:21.417 [2024-11-20 17:06:13.323511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4ef0 is same with the state(6) to be set [... same tqpair=0xda4ef0 message repeated 5 more times, 17:06:13.323532 through 17:06:13.323561, interleaved mid-line with the I/O abort messages below ...] 00:23:21.417 [2024-11-20 17:06:13.323544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.417 [2024-11-20 17:06:13.323556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... matching WRITE commands cid:1-63 (lba 24704-32640, len:128), each completed ABORTED - SQ DELETION (00/08), 17:06:13.323567 through 17:06:13.333593; console timestamps jump from 17:06:13.324132 to 17:06:13.333067 between cid:34 and cid:35 ...]
00:23:21.419 [2024-11-20 17:06:13.334125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.419 [2024-11-20 17:06:13.334147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... same ASYNC EVENT REQUEST (0c) / ABORTED - SQ DELETION (00/08) pair repeated for cid:1-3, followed at 17:06:13.334217 by nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27b5730 is same with the state(6) to be set ...] 00:23:21.419 [2024-11-20 17:06:13.334242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2370790 (9): Bad file descriptor 00:23:21.419 [2024-11-20 17:06:13.334263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228a610 (9): Bad file descriptor 00:23:21.419 [2024-11-20 17:06:13.334275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27b9080 (9): Bad file descriptor 00:23:21.419 [2024-11-20 17:06:13.334289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x2796360 (9): Bad file descriptor 00:23:21.419 [2024-11-20 17:06:13.334306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2793dc0 (9): Bad file descriptor 00:23:21.419 [2024-11-20 17:06:13.334323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2372cb0 (9): Bad file descriptor 00:23:21.419 [2024-11-20 17:06:13.334336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2372850 (9): Bad file descriptor 00:23:21.419 [2024-11-20 17:06:13.334350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2370fc0 (9): Bad file descriptor 00:23:21.419 [2024-11-20 17:06:13.334366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27db290 (9): Bad file descriptor 00:23:21.419 [2024-11-20 17:06:13.338543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:21.419 [2024-11-20 17:06:13.338577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:21.419 [2024-11-20 17:06:13.339445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.419 [2024-11-20 17:06:13.339488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2372cb0 with addr=10.0.0.2, port=4420 00:23:21.419 [2024-11-20 17:06:13.339501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372cb0 is same with the state(6) to be set 00:23:21.419 [2024-11-20 17:06:13.339718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.419 [2024-11-20 17:06:13.339730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2372850 with addr=10.0.0.2, port=4420 00:23:21.419 [2024-11-20 17:06:13.339737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372850 is same with the state(6) to be set 00:23:21.419 [2024-11-20 17:06:13.340593] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.419 [2024-11-20 17:06:13.340637] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.419 [2024-11-20 17:06:13.340675] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.419 [2024-11-20 17:06:13.340712] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.419 [2024-11-20 17:06:13.340771] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.419 [2024-11-20 17:06:13.340847] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.419 [2024-11-20 17:06:13.340862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:21.419 [2024-11-20 17:06:13.340891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2372cb0 (9): Bad file descriptor 00:23:21.419 [2024-11-20 17:06:13.340903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2372850 (9): Bad file descriptor 00:23:21.419 [2024-11-20 17:06:13.341006] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.419 [2024-11-20 17:06:13.341432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.419 [2024-11-20 17:06:13.341472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x2370790 with addr=10.0.0.2, port=4420 00:23:21.419 [2024-11-20 17:06:13.341485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2370790 is same with the state(6) to be set 00:23:21.419 [2024-11-20 17:06:13.341498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:21.419 [2024-11-20 17:06:13.341506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:21.419 [2024-11-20 17:06:13.341517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:21.419 [2024-11-20 17:06:13.341529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:21.419 [2024-11-20 17:06:13.341539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:21.419 [2024-11-20 17:06:13.341547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:21.419 [2024-11-20 17:06:13.341555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:21.419 [2024-11-20 17:06:13.341563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:21.419 [2024-11-20 17:06:13.341650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2370790 (9): Bad file descriptor 00:23:21.419 [2024-11-20 17:06:13.341692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:21.419 [2024-11-20 17:06:13.341700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:21.419 [2024-11-20 17:06:13.341707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:21.419 [2024-11-20 17:06:13.341714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
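Two errno values carry most of the diagnosis above: the connect() failures report errno = 111, which on Linux is ECONNREFUSED (nothing was listening on 10.0.0.2:4420 at that moment), and the flush failures report (9), i.e. EBADF, because the qpair's socket descriptor had already been closed. A minimal C sketch (illustrative only, not from the SPDK tree) that confirms the mapping:

/* errno_names.c - print the error strings for the two errno values in
 * the log: 111 (connect() failure) and 9 (failed flush).
 * Build and run: cc errno_names.c -o errno_names && ./errno_names */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    printf("errno 111 = %s\n", strerror(111)); /* Connection refused */
    printf("errno   9 = %s\n", strerror(9));   /* Bad file descriptor */
    /* Sanity check against the symbolic constants on this host. */
    printf("ECONNREFUSED=%d EBADF=%d\n", ECONNREFUSED, EBADF);
    return 0;
}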
00:23:21.419 [2024-11-20 17:06:13.344101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27b5730 (9): Bad file descriptor
00:23:21.419 [2024-11-20 17:06:13.344264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.419 [2024-11-20 17:06:13.344278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/abort pair repeats for cid:1-63, lba:24704-32640, len:128, timestamps 17:06:13.344294-17:06:13.345376 ...]
00:23:21.421 [2024-11-20 17:06:13.345384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2773dd0 is same with the state(6) to be set
00:23:21.421 [2024-11-20 17:06:13.346662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.421 [2024-11-20 17:06:13.346676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/abort pair repeats for cid:1-63, lba:16512-24448, len:128, timestamps 17:06:13.346690-17:06:13.347782 ...]
00:23:21.423 [2024-11-20 17:06:13.347791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2775070 is same with the state(6) to be set
00:23:21.423 [2024-11-20 17:06:13.349059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.423 [2024-11-20 17:06:13.349072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/abort pair repeats for cid:5-24, lba:17024-19456; capture truncated mid-entry at 17:06:13.349427 ...]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.423 [2024-11-20 17:06:13.349435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.423 [2024-11-20 17:06:13.349446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.423 [2024-11-20 17:06:13.349454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.423 [2024-11-20 17:06:13.349463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.423 [2024-11-20 17:06:13.349472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.423 [2024-11-20 17:06:13.349481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.423 [2024-11-20 17:06:13.349489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.423 [2024-11-20 17:06:13.349498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.423 [2024-11-20 17:06:13.349506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.423 [2024-11-20 17:06:13.349516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.423 [2024-11-20 17:06:13.349524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.423 [2024-11-20 17:06:13.349533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.423 [2024-11-20 17:06:13.349541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.423 [2024-11-20 17:06:13.349550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.423 [2024-11-20 17:06:13.349558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.423 [2024-11-20 17:06:13.349568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.423 [2024-11-20 17:06:13.349575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.423 [2024-11-20 17:06:13.349584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.423 [2024-11-20 17:06:13.349592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.423 [2024-11-20 17:06:13.349601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.423 [2024-11-20 17:06:13.349608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349772] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.349990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.349997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.350007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.350014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.350024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.350031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.350041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.350048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.350057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.350065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.350075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.350082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.350091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.350099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.350110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.350117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.350127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.350134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.350143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.350151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.350163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2776310 is same with the state(6) to be set 00:23:21.424 [2024-11-20 17:06:13.351442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.351455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.351468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.351477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.351489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.351498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.351509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.351518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.351530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.351539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.351550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.351557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.424 [2024-11-20 17:06:13.351567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.424 [2024-11-20 17:06:13.351574] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.351992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.351999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.352009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.352017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.352026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.352033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.352045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.352053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.352062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.352070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.352079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.352086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.352096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.352103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.352113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.352121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.352130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.352138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.352147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.352154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.352170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.352177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.352187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.352194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.352203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.352210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.352220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.352227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.352236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.352244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.425 [2024-11-20 17:06:13.352253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.425 [2024-11-20 17:06:13.352262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:21.426 [2024-11-20 17:06:13.352272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.352279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.352289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.352296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.352305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.352313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.352322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.352329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.352339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.352346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.352356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.352363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.352374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.352381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.352391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.352398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.352408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.352416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.352425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.352432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 
17:06:13.352442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.352449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.352458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.352466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.352477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.352484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.352494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.352501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.352511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.352518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.352527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.352535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.352544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.352551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.352559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2777620 is same with the state(6) to be set 00:23:21.426 [2024-11-20 17:06:13.353821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.353835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.353848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.353858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.353868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.353875] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.353885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.353893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.353902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.353909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.353919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.353926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.353936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.353943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.353955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.353963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.353972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.353980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.353989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.353996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.354006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.354013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.354023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.354030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.354039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.354047] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.354056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.354064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.354073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.354080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.354090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.354097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.354107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.354114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.354124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.354131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.354141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.354148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.354161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.354171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.354181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.354188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.426 [2024-11-20 17:06:13.354198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.426 [2024-11-20 17:06:13.354205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.427 [2024-11-20 17:06:13.354215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.427 [2024-11-20 17:06:13.354222] nvme_qpair.c: 
00:23:21.427 [2024-11-20 17:06:13.354232 - 354919] nvme_qpair.c: 243:nvme_io_qpair_print_command + 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:23-63 nsid:1 lba:27520-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [leading completion fragment and 41 identical command/completion pairs condensed]
00:23:21.428 [2024-11-20 17:06:13.354927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2778930 is same with the state(6) to be set
00:23:21.428 [2024-11-20 17:06:13.356210 - 357322] nvme_qpair.c: 243:nvme_io_qpair_print_command + 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 identical command/completion pairs condensed]
00:23:21.429 [2024-11-20 17:06:13.357330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25b3470 is same with the state(6) to be set
00:23:21.429 [2024-11-20 17:06:13.358592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:21.429 [2024-11-20 17:06:13.358612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:21.429 [2024-11-20 17:06:13.358625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:21.429 [2024-11-20 17:06:13.358639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:21.429 [2024-11-20 17:06:13.358726] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:23:21.429 [2024-11-20 17:06:13.358742] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
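The status printed for every aborted READ above, (00/08), is spdk_nvme_print_completion's (SCT/SC) pair: Status Code Type 0x0 (Generic Command Status) and Status Code 0x08 (Command Aborted due to SQ Deletion), meaning the I/Os still queued on qid:1 were discarded when the submission queue was torn down during the controller reset. A minimal, self-contained decode sketch, assuming the NVMe base-spec bit layout for the 16-bit completion status word (this is not SPDK's source):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Status word with P=0, SC=0x08, SCT=0x0, CRD=0, M=0, DNR=0
         * (assumed layout per the NVMe base spec: bit 0 = P,
         * bits 1-8 = SC, bits 9-11 = SCT, bits 12-13 = CRD,
         * bit 14 = M, bit 15 = DNR). */
        uint16_t status = 0x08 << 1;

        unsigned sc  = (status >> 1) & 0xff; /* Status Code      */
        unsigned sct = (status >> 9) & 0x7;  /* Status Code Type */
        unsigned dnr = (status >> 15) & 0x1; /* Do Not Retry     */

        /* Prints "(00/08) dnr:0 -> ABORTED - SQ DELETION". */
        printf("(%02x/%02x) dnr:%u -> %s\n", sct, sc, dnr,
               (sct == 0x0 && sc == 0x08) ? "ABORTED - SQ DELETION" : "other");
        return 0;
    }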
00:23:21.429 [2024-11-20 17:06:13.358817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:21.429 [2024-11-20 17:06:13.358830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:21.429 [2024-11-20 17:06:13.359099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:21.429 [2024-11-20 17:06:13.359115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2370fc0 with addr=10.0.0.2, port=4420
00:23:21.429 [2024-11-20 17:06:13.359124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2370fc0 is same with the state(6) to be set
00:23:21.429 [2024-11-20 17:06:13.359290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:21.429 [2024-11-20 17:06:13.359301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2796360 with addr=10.0.0.2, port=4420
00:23:21.429 [2024-11-20 17:06:13.359308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2796360 is same with the state(6) to be set
00:23:21.429 [2024-11-20 17:06:13.359607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:21.430 [2024-11-20 17:06:13.359617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2793dc0 with addr=10.0.0.2, port=4420
00:23:21.430 [2024-11-20 17:06:13.359625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2793dc0 is same with the state(6) to be set
00:23:21.430 [2024-11-20 17:06:13.359973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:21.430 [2024-11-20 17:06:13.359983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228a610 with addr=10.0.0.2, port=4420
00:23:21.430 [2024-11-20 17:06:13.359990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228a610 is same with the state(6) to be set
00:23:21.430 [2024-11-20 17:06:13.361338 - 362441] nvme_qpair.c: 243:nvme_io_qpair_print_command + 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 identical command/completion pairs condensed]
00:23:21.431 [2024-11-20 17:06:13.362450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2779c20 is same with the state(6) to be set
00:23:21.431 [2024-11-20 17:06:13.364234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:21.431 [2024-11-20 17:06:13.364260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:21.431 [2024-11-20 17:06:13.364269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:21.431 task offset: 26624 on job bdev=Nvme1n1 fails
00:23:21.431 [2024-11-20T16:06:13.607Z] Latency(us), Device Information (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; every job except Nvme3n1 ended in about 0.90-0.93 seconds with error):
00:23:21.431 Device       runtime(s)     IOPS    MiB/s   Fail/s    TO/s     Average         min         max
00:23:21.431 Nvme1n1            0.90   213.58    13.35    71.19    0.00   222055.04    18022.40   253405.87
00:23:21.431 Nvme2n1            0.90   213.31    13.33    71.10    0.00   217624.11    14527.15   235929.60
00:23:21.431 Nvme3n1            0.90   212.95    13.31     0.00    0.00   284327.54    17257.81   262144.00
00:23:21.431 Nvme4n1            0.91   211.00    13.19    70.33    0.00   210588.48    12561.07   258648.75
00:23:21.431 Nvme5n1            0.91   140.30     8.77    70.15    0.00   275323.45    21517.65   267386.88
00:23:21.431 Nvme6n1            0.91   144.31     9.02    69.97    0.00   264267.51    19442.35   272629.76
00:23:21.431 Nvme7n1            0.92   139.57     8.72    69.78    0.00   264257.71    19660.80   246415.36
00:23:21.431 Nvme8n1            0.92   214.25    13.39    69.60    0.00   190239.90     8628.91   253405.87
00:23:21.431 Nvme9n1            0.93   138.08     8.63    69.04    0.00   254936.75    36700.16   234181.97
00:23:21.432 Nvme10n1           0.92   138.85     8.68    69.42    0.00   246866.77    17694.72   253405.87
00:23:21.432 Total                 -  1766.20   110.39   630.60    0.00   239109.52     8628.91   272629.76
00:23:21.432 [2024-11-20 17:06:13.388040] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:21.432 [2024-11-20 17:06:13.388070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:21.432 [2024-11-20 17:06:13.388442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:21.432 [2024-11-20 17:06:13.388459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27b9080 with addr=10.0.0.2, port=4420
00:23:21.432 [2024-11-20 17:06:13.388469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27b9080 is same with the state(6) to be set
00:23:21.432 [2024-11-20 17:06:13.388750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:21.432 [2024-11-20 17:06:13.388760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27db290 with addr=10.0.0.2, port=4420
00:23:21.432 [2024-11-20 17:06:13.388768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27db290 is same with the state(6) to be set
00:23:21.432 [2024-11-20 17:06:13.388780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2370fc0 (9): Bad file descriptor
00:23:21.432 [2024-11-20 17:06:13.388793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2796360 (9): Bad file descriptor
00:23:21.432 [2024-11-20 17:06:13.388802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2793dc0 (9): Bad file descriptor
00:23:21.432 [2024-11-20 17:06:13.388811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228a610 (9): Bad file descriptor
00:23:21.432 [2024-11-20 17:06:13.389275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:21.432 [2024-11-20 17:06:13.389290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2372850 with addr=10.0.0.2, port=4420
00:23:21.432 [2024-11-20 17:06:13.389298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372850 is same with the state(6) to be set
00:23:21.432 [2024-11-20 17:06:13.389572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:21.432 [2024-11-20 17:06:13.389582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2372cb0 with addr=10.0.0.2, port=4420
00:23:21.432 [2024-11-20 17:06:13.389594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372cb0 is same with the state(6) to be set
00:23:21.432 [2024-11-20 17:06:13.389653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:21.432 [2024-11-20 17:06:13.389663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2370790 with addr=10.0.0.2, port=4420
00:23:21.432 [2024-11-20 17:06:13.389670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2370790 is same with the state(6) to be set
*ERROR*: connect() failed, errno = 111 00:23:21.432 [2024-11-20 17:06:13.389846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27b5730 with addr=10.0.0.2, port=4420 00:23:21.432 [2024-11-20 17:06:13.389853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27b5730 is same with the state(6) to be set 00:23:21.432 [2024-11-20 17:06:13.389862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27b9080 (9): Bad file descriptor 00:23:21.432 [2024-11-20 17:06:13.389872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27db290 (9): Bad file descriptor 00:23:21.432 [2024-11-20 17:06:13.389881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:21.432 [2024-11-20 17:06:13.389888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:21.432 [2024-11-20 17:06:13.389897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:21.432 [2024-11-20 17:06:13.389906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:21.432 [2024-11-20 17:06:13.389914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:21.432 [2024-11-20 17:06:13.389920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:21.432 [2024-11-20 17:06:13.389927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:21.432 [2024-11-20 17:06:13.389934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:23:21.432 [2024-11-20 17:06:13.389941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:21.432 [2024-11-20 17:06:13.389947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:21.432 [2024-11-20 17:06:13.389955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:21.432 [2024-11-20 17:06:13.389961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:21.432 [2024-11-20 17:06:13.389968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:21.432 [2024-11-20 17:06:13.389975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:21.432 [2024-11-20 17:06:13.389981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:21.432 [2024-11-20 17:06:13.389988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:21.432 [2024-11-20 17:06:13.390039] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:23:21.432 [2024-11-20 17:06:13.390051] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:23:21.432 [2024-11-20 17:06:13.390416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2372850 (9): Bad file descriptor 00:23:21.432 [2024-11-20 17:06:13.390433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2372cb0 (9): Bad file descriptor 00:23:21.432 [2024-11-20 17:06:13.390442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2370790 (9): Bad file descriptor 00:23:21.432 [2024-11-20 17:06:13.390452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27b5730 (9): Bad file descriptor 00:23:21.432 [2024-11-20 17:06:13.390460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:21.432 [2024-11-20 17:06:13.390466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:21.432 [2024-11-20 17:06:13.390474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:21.432 [2024-11-20 17:06:13.390480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:23:21.432 [2024-11-20 17:06:13.390487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:21.432 [2024-11-20 17:06:13.390493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:21.432 [2024-11-20 17:06:13.390501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:21.432 [2024-11-20 17:06:13.390507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:21.432 [2024-11-20 17:06:13.390545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:21.432 [2024-11-20 17:06:13.390556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:21.432 [2024-11-20 17:06:13.390565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:21.432 [2024-11-20 17:06:13.390573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:21.432 [2024-11-20 17:06:13.390606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:21.432 [2024-11-20 17:06:13.390613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:21.432 [2024-11-20 17:06:13.390619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:21.432 [2024-11-20 17:06:13.390626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:21.432 [2024-11-20 17:06:13.390634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:21.432 [2024-11-20 17:06:13.390640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:21.432 [2024-11-20 17:06:13.390647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:23:21.432 [2024-11-20 17:06:13.390653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:21.432 [2024-11-20 17:06:13.390660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:21.432 [2024-11-20 17:06:13.390666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:21.432 [2024-11-20 17:06:13.390673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:21.432 [2024-11-20 17:06:13.390679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:21.432 [2024-11-20 17:06:13.390688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:21.432 [2024-11-20 17:06:13.390694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:21.432 [2024-11-20 17:06:13.390703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:21.433 [2024-11-20 17:06:13.390711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:23:21.433 [2024-11-20 17:06:13.391002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.433 [2024-11-20 17:06:13.391015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228a610 with addr=10.0.0.2, port=4420 00:23:21.433 [2024-11-20 17:06:13.391022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228a610 is same with the state(6) to be set 00:23:21.433 [2024-11-20 17:06:13.391240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.433 [2024-11-20 17:06:13.391250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2793dc0 with addr=10.0.0.2, port=4420 00:23:21.433 [2024-11-20 17:06:13.391257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2793dc0 is same with the state(6) to be set 00:23:21.433 [2024-11-20 17:06:13.391601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.433 [2024-11-20 17:06:13.391611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2796360 with addr=10.0.0.2, port=4420 00:23:21.433 [2024-11-20 17:06:13.391619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2796360 is same with the state(6) to be set 00:23:21.433 [2024-11-20 17:06:13.391899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.433 [2024-11-20 17:06:13.391909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2370fc0 with addr=10.0.0.2, port=4420 00:23:21.433 [2024-11-20 17:06:13.391916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2370fc0 is same with the state(6) to be set 00:23:21.433 [2024-11-20 17:06:13.391944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228a610 (9): Bad file descriptor 00:23:21.433 [2024-11-20 17:06:13.391954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2793dc0 (9): Bad file descriptor 00:23:21.433 [2024-11-20 
17:06:13.391964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2796360 (9): Bad file descriptor 00:23:21.433 [2024-11-20 17:06:13.391973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2370fc0 (9): Bad file descriptor 00:23:21.433 [2024-11-20 17:06:13.392000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:21.433 [2024-11-20 17:06:13.392008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:21.433 [2024-11-20 17:06:13.392015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:21.433 [2024-11-20 17:06:13.392021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:21.433 [2024-11-20 17:06:13.392028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:21.433 [2024-11-20 17:06:13.392034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:21.433 [2024-11-20 17:06:13.392041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:21.433 [2024-11-20 17:06:13.392047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:21.433 [2024-11-20 17:06:13.392056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:21.433 [2024-11-20 17:06:13.392063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:21.433 [2024-11-20 17:06:13.392069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:21.433 [2024-11-20 17:06:13.392079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:23:21.433 [2024-11-20 17:06:13.392087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:21.433 [2024-11-20 17:06:13.392093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:21.433 [2024-11-20 17:06:13.392100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:21.433 [2024-11-20 17:06:13.392107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
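The errno = 111 in the connect() failures above is ECONNREFUSED: nvmf_shutdown_tc3 takes the target side down while bdevperf still has I/O in flight, so every reconnect attempt to 10.0.0.2:4420 is refused and each controller ends with "Resetting controller failed". A minimal way to confirm the listener really is gone at this point in a run is sketched below; the two commands are illustrative only (they are not part of shutdown.sh) and assume the cvl_0_0_ns_spdk namespace that this job sets up:

    # target side: nothing should still be listening on the NVMe/TCP port
    ip netns exec cvl_0_0_ns_spdk ss -ltn | grep :4420 || echo 'no listener on 4420'
    # initiator side: a refused TCP connect is exactly errno 111 (ECONNREFUSED)
    nc -zv 10.0.0.2 4420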
00:23:21.433 17:06:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2031770 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2031770 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2031770 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:22.819 rmmod nvme_tcp 00:23:22.819 
rmmod nvme_fabrics 00:23:22.819 rmmod nvme_keyring 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2031118 ']' 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2031118 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2031118 ']' 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2031118 00:23:22.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2031118) - No such process 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2031118 is not found' 00:23:22.819 Process with pid 2031118 is not found 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.819 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.733 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:24.733 00:23:24.733 real 0m7.971s 00:23:24.733 user 0m19.978s 00:23:24.733 sys 0m1.290s 00:23:24.733 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:24.733 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:24.733 ************************************ 00:23:24.733 END TEST nvmf_shutdown_tc3 00:23:24.733 ************************************ 00:23:24.733 17:06:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:24.733 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:24.733 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:24.733 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:24.733 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:24.733 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:24.733 ************************************ 00:23:24.733 START TEST nvmf_shutdown_tc4 00:23:24.733 ************************************ 00:23:24.733 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:23:24.733 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:24.733 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:24.733 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:24.733 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.733 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:24.734 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:24.734 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.734 17:06:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:24.734 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:24.734 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:24.734 17:06:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:24.734 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:24.735 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.735 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.996 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.996 17:06:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.996 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:24.996 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.996 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.996 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.996 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:24.996 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:24.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:23:24.996 00:23:24.996 --- 10.0.0.2 ping statistics --- 00:23:24.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.996 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:23:24.996 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:24.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:23:24.996 00:23:24.996 --- 10.0.0.1 ping statistics --- 00:23:24.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.996 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:23:24.996 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.996 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:24.996 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:24.996 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.996 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:24.996 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:24.996 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.996 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:24.996 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:25.257 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:25.257 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:25.257 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.257 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:25.257 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2033313 00:23:25.257 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2033313 00:23:25.257 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:25.257 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2033313 ']' 00:23:25.257 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.257 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.257 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
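Here nvmf_tgt is launched with -m 0x1E, a hexadecimal core mask: 0x1E is binary 11110, so SPDK starts one reactor on each of cores 1 through 4 and leaves core 0 free, which is exactly what the four "Reactor started on core N" notices below report. A small bash sketch (not part of the test script) that decodes any such mask:

    mask=0x1E
    for i in {0..7}; do
      # test bit i of the mask; print the core if it is set
      (( (mask >> i) & 1 )) && echo "reactor thread on core $i"
    done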
00:23:25.257 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.257 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:25.257 [2024-11-20 17:06:17.258832] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:23:25.257 [2024-11-20 17:06:17.258899] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.257 [2024-11-20 17:06:17.354841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:25.257 [2024-11-20 17:06:17.393718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.257 [2024-11-20 17:06:17.393755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.257 [2024-11-20 17:06:17.393760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.257 [2024-11-20 17:06:17.393766] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.257 [2024-11-20 17:06:17.393770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.257 [2024-11-20 17:06:17.395232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.257 [2024-11-20 17:06:17.395534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.257 [2024-11-20 17:06:17.395689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:25.257 [2024-11-20 17:06:17.395689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:26.197 [2024-11-20 17:06:18.112943] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:26.197 17:06:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.197 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:26.197 Malloc1 
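The for/cat loop above appends one block of RPCs per subsystem into rpcs.txt, and the rpc_cmd call then replays the whole batch against the target; the Malloc1 through Malloc10 lines interleaved here are the bdev names the target echoes back as each namespace backing device is created. As a rough illustration (the exact sizes and flags are not visible in this log, so treat this as a hypothetical reconstruction using standard SPDK RPC names), one block in that batch would look like:

    bdev_malloc_create -b Malloc1 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420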
00:23:26.197 [2024-11-20 17:06:18.224723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.197 Malloc2 00:23:26.197 Malloc3 00:23:26.197 Malloc4 00:23:26.197 Malloc5 00:23:26.497 Malloc6 00:23:26.497 Malloc7 00:23:26.497 Malloc8 00:23:26.497 Malloc9 00:23:26.497 Malloc10 00:23:26.497 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.497 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:26.497 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:26.497 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:26.497 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2033545 00:23:26.497 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:26.497 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:26.810 [2024-11-20 17:06:18.701589] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:32.109 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:32.109 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2033313 00:23:32.109 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2033313 ']' 00:23:32.109 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2033313 00:23:32.109 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:32.109 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.109 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2033313 00:23:32.109 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:32.109 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:32.109 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2033313' 00:23:32.109 killing process with pid 2033313 00:23:32.109 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2033313 00:23:32.109 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2033313 00:23:32.109 Write completed with error (sct=0, 
sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 [2024-11-20 17:06:23.701800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 
Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 [2024-11-20 17:06:23.702684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:32.109 starting I/O failed: -6 00:23:32.109 starting I/O failed: -6 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 
00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.109 starting I/O failed: -6 00:23:32.109 Write completed with error (sct=0, sc=8) 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 [2024-11-20 17:06:23.703526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a340 is same with the state(6) to be set 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 [2024-11-20 17:06:23.703563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a340 is same with the state(6) to be set 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 [2024-11-20 17:06:23.703570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a340 is same with the state(6) to be set 00:23:32.110 starting I/O failed: -6 00:23:32.110 [2024-11-20 17:06:23.703575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a340 is same with the state(6) to be set 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 [2024-11-20 17:06:23.703812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:32.110 [2024-11-20 17:06:23.703813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a830 is same with the state(6) to be set 00:23:32.110 [2024-11-20 17:06:23.703835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a830 is same with the state(6) to be set 00:23:32.110 [2024-11-20 
17:06:23.703841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a830 is same with the state(6) to be set 00:23:32.110 [2024-11-20 17:06:23.703846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a830 is same with the state(6) to be set 00:23:32.110 [2024-11-20 17:06:23.703851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a830 is same with the state(6) to be set 00:23:32.110 [2024-11-20 17:06:23.703856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a830 is same with the state(6) to be set 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 [2024-11-20 17:06:23.704022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197ad20 is same with the state(6) to be set 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 [2024-11-20 17:06:23.704047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197ad20 is same with the state(6) to be set 00:23:32.110 starting I/O failed: -6 00:23:32.110 [2024-11-20 17:06:23.704053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197ad20 is same with the state(6) to be set 00:23:32.110 [2024-11-20 17:06:23.704059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197ad20 is same with the state(6) to be set 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 [2024-11-20 17:06:23.704064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197ad20 is same with the state(6) to be set 00:23:32.110 starting I/O failed: -6 00:23:32.110 [2024-11-20 17:06:23.704069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197ad20 is same with Write completed with error (sct=0, sc=8) 00:23:32.110 the state(6) to be set 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed 
with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 [2024-11-20 17:06:23.704279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1979e50 is same with the state(6) to be set 00:23:32.110 starting I/O failed: -6 00:23:32.110 [2024-11-20 17:06:23.704300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1979e50 is same with Write completed with error (sct=0, sc=8) 00:23:32.110 the state(6) to be set 00:23:32.110 [2024-11-20 17:06:23.704307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1979e50 is same with the state(6) to be set 00:23:32.110 starting I/O failed: -6 00:23:32.110 [2024-11-20 17:06:23.704312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1979e50 is same with the state(6) to be set 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 [2024-11-20 17:06:23.704317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1979e50 is same with the state(6) to be set 00:23:32.110 [2024-11-20 17:06:23.704322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1979e50 is same with the state(6) to be set 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.110 Write completed with error (sct=0, sc=8) 00:23:32.110 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, 
sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 [2024-11-20 17:06:23.704872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aabb40 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.704884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aabb40 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.704889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aabb40 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.704894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aabb40 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.704898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aabb40 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.704904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aabb40 is same with the state(6) to be set 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 [2024-11-20 17:06:23.705080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:32.111 NVMe io qpair process completion error 00:23:32.111 [2024-11-20 17:06:23.705940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c7c0 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.705955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c7c0 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.705961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c7c0 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.705965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c7c0 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.705970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c7c0 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.705975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c7c0 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.706118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197cc90 is same with the state(6) to be set 00:23:32.111 [2024-11-20 
17:06:23.706133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197cc90 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.706138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197cc90 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.706143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197cc90 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.706148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197cc90 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.706153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197cc90 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.706164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197cc90 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.706346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d180 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.706363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d180 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.706368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d180 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.706373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d180 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.706378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d180 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.706386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d180 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.706391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d180 is same with the state(6) to be set 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 [2024-11-20 17:06:23.706630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c2f0 is same with the state(6) to be set 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 [2024-11-20 17:06:23.706644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c2f0 is same with starting I/O failed: -6 00:23:32.111 the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.706651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c2f0 is same with the state(6) to be set 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 [2024-11-20 17:06:23.706657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c2f0 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.706662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c2f0 is same with 
the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.706667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c2f0 is same with the state(6) to be set 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 [2024-11-20 17:06:23.706672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c2f0 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.706677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c2f0 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.706682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c2f0 is same with the state(6) to be set 00:23:32.111 [2024-11-20 17:06:23.706687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c2f0 is same with the state(6) to be set 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 [2024-11-20 17:06:23.706692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c2f0 is same with the state(6) to be set 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.111 starting I/O failed: -6 00:23:32.111 Write completed with error (sct=0, sc=8) 00:23:32.112 [2024-11-20 17:06:23.706882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:32.112 starting I/O failed: -6 00:23:32.112 starting I/O failed: -6 00:23:32.112 starting I/O failed: -6 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error 
(sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 [2024-11-20 17:06:23.708130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:32.112 starting I/O failed: -6 00:23:32.112 starting I/O failed: -6 00:23:32.112 starting I/O failed: -6 00:23:32.112 starting I/O failed: -6 00:23:32.112 starting I/O failed: -6 00:23:32.112 starting I/O failed: -6 00:23:32.112 starting I/O failed: -6 00:23:32.112 starting I/O failed: -6 00:23:32.112 starting I/O failed: -6 00:23:32.112 starting I/O failed: -6 00:23:32.112 NVMe io qpair process completion error 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write 
completed with error (sct=0, sc=8) 00:23:32.112 [2024-11-20 17:06:23.710361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.112 starting I/O failed: -6 00:23:32.112 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 [2024-11-20 17:06:23.711191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: 
-6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error 
(sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 [2024-11-20 17:06:23.712134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 
00:23:32.113 starting I/O failed: -6 00:23:32.113 Write completed with error (sct=0, sc=8) 00:23:32.113 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 [2024-11-20 17:06:23.714557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:32.114 NVMe io qpair process completion error 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 
00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 [2024-11-20 17:06:23.715813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write 
completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 [2024-11-20 17:06:23.716665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 Write completed with error (sct=0, sc=8) 00:23:32.114 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 
Write completed with error (sct=0, sc=8) 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 [2024-11-20 17:06:23.717595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting 
I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O 
failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 [2024-11-20 17:06:23.720057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:32.115 NVMe io qpair process completion error 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 Write completed with error (sct=0, sc=8) 00:23:32.115 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 [2024-11-20 17:06:23.720959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on 
qpair id 3 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 [2024-11-20 17:06:23.721771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write 
completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 
starting I/O failed: -6 00:23:32.116 [2024-11-20 17:06:23.722708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:32.116 starting I/O failed: -6 00:23:32.116 starting I/O failed: -6 00:23:32.116 starting I/O failed: -6 00:23:32.116 starting I/O failed: -6 00:23:32.116 starting I/O failed: -6 00:23:32.116 starting I/O failed: -6 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.116 starting I/O failed: -6 00:23:32.116 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 
starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 [2024-11-20 17:06:23.724740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:32.117 NVMe io qpair process completion error 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed 
with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 [2024-11-20 17:06:23.726083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O 
failed: -6 00:23:32.117 [2024-11-20 17:06:23.726900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:32.117 starting I/O failed: -6 00:23:32.117 starting I/O failed: -6 00:23:32.117 starting I/O failed: -6 00:23:32.117 starting I/O failed: -6 00:23:32.117 starting I/O failed: -6 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.117 starting I/O failed: -6 00:23:32.117 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O 
failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 [2024-11-20 17:06:23.728261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 
00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 [2024-11-20 17:06:23.730818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:32.118 NVMe io qpair process completion error 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error 
(sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 Write completed with error (sct=0, sc=8) 00:23:32.118 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 [2024-11-20 17:06:23.732039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write 
completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 [2024-11-20 17:06:23.732878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 
Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 [2024-11-20 17:06:23.733821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with 
error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error 
(sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 [2024-11-20 17:06:23.735542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:32.120 NVMe io qpair process completion error 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed 
with error (sct=0, sc=8) 00:23:32.120 [2024-11-20 17:06:23.736846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed: -6 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 [2024-11-20 17:06:23.737697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error 
(sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: -6 00:23:32.121 Write completed with error (sct=0, sc=8) 00:23:32.121 starting I/O failed: 
-6
00:23:32.121 Write completed with error (sct=0, sc=8)
00:23:32.121 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:23:32.121 [2024-11-20 17:06:23.738639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error / I/O-failed lines trimmed ...]
00:23:32.122 [2024-11-20 17:06:23.741569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:32.122 NVMe io qpair process completion error
[... repeated write-error / I/O-failed lines trimmed ...]
00:23:32.122 [2024-11-20 17:06:23.742843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error / I/O-failed lines trimmed ...]
00:23:32.123 [2024-11-20 17:06:23.743783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error / I/O-failed lines trimmed ...]
00:23:32.123 [2024-11-20 17:06:23.744702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error / I/O-failed lines trimmed ...]
00:23:32.124 [2024-11-20 17:06:23.746542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:32.124 NVMe io qpair process completion error
[... repeated write-error / I/O-failed lines trimmed ...]
00:23:32.124 [2024-11-20 17:06:23.747716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error / I/O-failed lines trimmed ...]
00:23:32.124 [2024-11-20 17:06:23.748710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error / I/O-failed lines trimmed ...]
00:23:32.125 [2024-11-20 17:06:23.749645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error / I/O-failed lines trimmed ...]
00:23:32.126 [2024-11-20 17:06:23.752299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:32.126 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" lines trimmed ...]
00:23:32.126 Initializing NVMe Controllers
00:23:32.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:32.126 Controller IO queue size 128, less than required.
00:23:32.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:32.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:32.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:32.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:32.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:32.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:32.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:32.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:32.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:32.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
[... the same "Controller IO queue size 128, less than required." / "Consider using lower queue depth or smaller IO size" warning pair was logged after each of the ten attach lines ...]
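
Each of the ten controllers above reports an IO queue of 128 entries, smaller than the queue depth the initiator asked for, so surplus requests sit queued inside the NVMe driver until a submission slot frees up. A minimal bash sketch of re-running the same perf binary with a queue depth that fits inside the controller queue; -q, -o, -w, -t and -r are standard spdk_nvme_perf options, but the specific values and the single target subsystem chosen here are illustrative assumptions, not what this job ran:

    #!/usr/bin/env bash
    # Hypothetical re-run with the queue depth capped below the controller's
    # reported IO queue size (128), so requests are not queued in the driver.
    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    # -q: queue depth per qpair, -o: IO size in bytes, -w: workload pattern,
    # -t: run time in seconds, -r: transport ID of one target subsystem.
    "$PERF" -q 64 -o 4096 -w randwrite -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
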
00:23:32.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:32.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:32.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:32.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:32.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:32.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:32.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:32.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:32.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:32.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:32.126 Initialization complete. Launching workers.
00:23:32.126 ========================================================
00:23:32.126                                                                  Latency(us)
00:23:32.126 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:23:32.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1873.03      80.48   68358.37     679.09  137395.92
00:23:32.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1868.92      80.31   67784.80     720.68  145556.90
00:23:32.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1876.06      80.61   68020.84     562.14  118148.39
00:23:32.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1865.89      80.18   67928.46     660.34  119746.62
00:23:32.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1836.25      78.90   69060.96     850.75  119115.70
00:23:32.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1828.46      78.57   69388.14     863.27  118487.84
00:23:32.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1853.13      79.63   68497.98     600.64  118590.59
00:23:32.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1877.79      80.69   67635.48     665.73  130199.76
00:23:32.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1855.94      79.75   68460.67     867.98  132276.86
00:23:32.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1861.35      79.98   68302.77     918.40  118984.06
00:23:32.127 ========================================================
00:23:32.127 Total                                                                   :   18596.82     799.08   68339.73     562.14  145556.90
00:23:32.127
00:23:32.127 [2024-11-20 17:06:23.758855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648410 is same with the state(6) to be set
00:23:32.127 [2024-11-20 17:06:23.758899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647560 is same with the state(6) to be set
00:23:32.127 [2024-11-20 17:06:23.758930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648a70 is same with the state(6) to be set
00:23:32.127 [2024-11-20 17:06:23.758965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649ae0 is same with the state(6) to be set
00:23:32.127 [2024-11-20 17:06:23.758995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647890 is same with the state(6) to be set
00:23:32.127 [2024-11-20 17:06:23.759025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649720 is same with the state(6) to be set
00:23:32.127 [2024-11-20 17:06:23.759054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649900 is same with the state(6) to be set
00:23:32.127 [2024-11-20 17:06:23.759083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647bc0 is same with the state(6) to be set
00:23:32.127 [2024-11-20 17:06:23.759112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647ef0 is same with the state(6) to be set
00:23:32.127 [2024-11-20 17:06:23.759140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648740 is same with the state(6) to be set
00:23:32.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:32.127 17:06:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2033545
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2033545
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2033545
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
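
The NOT wait 2033545 trace above is the harness deliberately expecting failure: waiting on the already-finished perf process must return non-zero for the test to pass, and the es bookkeeping inverts that status. A minimal sketch of the inversion idiom, mirroring the shape of the traced logic rather than autotest_common.sh's exact code:

    #!/usr/bin/env bash
    # Run a command, capture its exit status, and succeed only if it failed.
    NOT() {
        local es=0
        "$@" || es=$?
        # Statuses above 128 indicate death by signal; the traced
        # (( es > 128 )) guard treats that as a real failure, not an
        # expected one, so it is passed through unchanged.
        if (( es > 128 )); then
            return "$es"
        fi
        # Success (es == 0) makes NOT fail; ordinary failure makes it succeed.
        (( es != 0 ))
    }

    NOT false && echo "false failed, which is what the caller required"
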
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:33.070 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:33.070 rmmod nvme_tcp
00:23:33.070 rmmod nvme_fabrics
00:23:33.070 rmmod nvme_keyring
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2033313 ']'
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2033313
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2033313 ']'
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2033313
00:23:33.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2033313) - No such process
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2033313 is not found'
00:23:33.070 Process with pid 2033313 is not found
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:33.070 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
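
The killprocess 2033313 call above finds nothing to kill: the target application had already exited, so the kill -0 probe at autotest_common.sh line 958 fails and the helper simply logs that the pid is gone. A simplified sketch of that probe-then-kill pattern; the real helper does more than this, so treat it as an illustration of the idiom rather than the actual implementation:

    #!/usr/bin/env bash
    # kill -0 sends no signal; it only checks that the pid exists and is
    # signalable, so a dead process is detected before any kill is attempted.
    killprocess() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid"
            # wait reaps the process if it is a child of this shell;
            # the error for non-children is deliberately ignored.
            wait "$pid" 2>/dev/null || true
        else
            echo "Process with pid $pid is not found"
        fi
    }
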
00:23:34.985 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:34.985
00:23:34.985 real    0m10.274s
00:23:34.985 user    0m27.933s
00:23:34.985 sys     0m4.048s
00:23:34.985 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:34.985 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:34.985 ************************************
00:23:34.985 END TEST nvmf_shutdown_tc4
00:23:34.985 ************************************
00:23:34.985 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:23:34.985
00:23:34.985 real    0m43.772s
00:23:34.985 user    1m46.514s
00:23:34.985 sys     0m14.019s
00:23:34.985 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:34.985 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:23:34.985 ************************************
00:23:34.985 END TEST nvmf_shutdown
00:23:34.985 ************************************
00:23:35.246 17:06:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:23:35.246 17:06:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:23:35.246 17:06:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:35.246 17:06:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:23:35.246 ************************************
00:23:35.246 START TEST nvmf_nsid
00:23:35.246 ************************************
00:23:35.246 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:23:35.246 * Looking for test storage...
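
The nsid test that starts here opens, in the trace just below, with an lcov version gate: lt 1.15 2 calls cmp_versions, which splits both version strings on their separators and compares them field by field, padding the shorter one with zeros. A self-contained sketch of that comparison logic, simplified from what the scripts/common.sh trace below shows (the real helper also splits on '-' and ':' and supports more operators than '<'):

    #!/usr/bin/env bash
    # Return 0 (true) when dotted version $1 is strictly less than $2.
    lt() {
        local -a ver1 ver2
        IFS=. read -ra ver1 <<< "$1"
        IFS=. read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            # Missing fields compare as 0, so "2" behaves like "2.0".
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "1.15 < 2"

Here the first field already decides the result (1 < 2), which matches the return 0 visible at scripts/common.sh@368 in the trace below.
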
00:23:35.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:23:35.246 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:23:35.246 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version
00:23:35.246 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:23:35.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:35.509 --rc genhtml_branch_coverage=1
00:23:35.509 --rc genhtml_function_coverage=1
00:23:35.509 --rc genhtml_legend=1
00:23:35.509 --rc geninfo_all_blocks=1
00:23:35.509 --rc geninfo_unexecuted_blocks=1
00:23:35.509
00:23:35.509 '
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:23:35.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:35.509 --rc genhtml_branch_coverage=1
00:23:35.509 --rc genhtml_function_coverage=1
00:23:35.509 --rc genhtml_legend=1
00:23:35.509 --rc geninfo_all_blocks=1
00:23:35.509 --rc geninfo_unexecuted_blocks=1
00:23:35.509
00:23:35.509 '
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:23:35.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:35.509 --rc genhtml_branch_coverage=1
00:23:35.509 --rc genhtml_function_coverage=1
00:23:35.509 --rc genhtml_legend=1
00:23:35.509 --rc geninfo_all_blocks=1
00:23:35.509 --rc geninfo_unexecuted_blocks=1
00:23:35.509
00:23:35.509 '
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:23:35.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:35.509 --rc genhtml_branch_coverage=1
00:23:35.509 --rc genhtml_function_coverage=1
00:23:35.509 --rc genhtml_legend=1
00:23:35.509 --rc geninfo_all_blocks=1
00:23:35.509 --rc geninfo_unexecuted_blocks=1
00:23:35.509
00:23:35.509 '
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux
== FreeBSD ]] 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.509 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:35.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:35.510 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:43.651 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:43.651 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:43.651 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:43.651 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:43.652 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:43.652 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:43.652 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:43.652 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:43.652 17:06:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:43.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:43.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:23:43.652 00:23:43.652 --- 10.0.0.2 ping statistics --- 00:23:43.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.652 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:43.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:43.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:23:43.652 00:23:43.652 --- 10.0.0.1 ping statistics --- 00:23:43.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.652 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.652 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:43.653 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:43.653 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.653 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:43.653 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:43.653 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.653 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:43.653 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:43.653 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:43.653 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:43.653 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:43.653 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:43.653 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2039001 00:23:43.653 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2039001 00:23:43.653 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:43.653 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2039001 ']' 00:23:43.653 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.653 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.653 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.653 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.653 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:43.653 [2024-11-20 17:06:35.088399] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:23:43.653 [2024-11-20 17:06:35.088469] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.653 [2024-11-20 17:06:35.187126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.653 [2024-11-20 17:06:35.237931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.653 [2024-11-20 17:06:35.237982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.653 [2024-11-20 17:06:35.237994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.653 [2024-11-20 17:06:35.238004] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.653 [2024-11-20 17:06:35.238011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:43.653 [2024-11-20 17:06:35.238756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.914 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.914 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:43.914 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:43.914 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:43.914 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:43.914 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.914 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:43.914 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2039085 00:23:43.914 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=961e2182-639e-4228-aaba-70b12b107e79 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=5fda27a0-14ab-4a50-924b-a9f4bd541cc8 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=7ac76653-3131-4f5f-83a1-02cf1dfed999 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.915 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:43.915 null0 00:23:43.915 null1 00:23:43.915 null2 00:23:43.915 [2024-11-20 17:06:36.013090] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:23:43.915 [2024-11-20 17:06:36.013175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2039085 ] 00:23:43.915 [2024-11-20 17:06:36.014690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.915 [2024-11-20 17:06:36.038959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.915 17:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.915 17:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2039085 /var/tmp/tgt2.sock 00:23:43.915 17:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2039085 ']' 00:23:43.915 17:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:43.915 17:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.915 17:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:43.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:23:43.915 17:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.915 17:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:44.176 [2024-11-20 17:06:36.107894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.176 [2024-11-20 17:06:36.160143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.436 17:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.436 17:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:44.436 17:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:44.698 [2024-11-20 17:06:36.733467] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.698 [2024-11-20 17:06:36.749650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:44.698 nvme0n1 nvme0n2 00:23:44.698 nvme1n1 00:23:44.698 17:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:44.698 17:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:44.698 17:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:46.085 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:46.085 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:46.085 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:46.085 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:46.085 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:46.085 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:46.085 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:46.085 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:46.085 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:46.085 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:46.085 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:46.085 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:46.085 17:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:47.471 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:47.471 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:47.471 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:47.471 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:47.471 17:06:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:47.471 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 961e2182-639e-4228-aaba-70b12b107e79 00:23:47.471 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:47.471 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:47.471 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:47.471 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:47.471 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:47.471 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=961e2182639e4228aaba70b12b107e79 00:23:47.471 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 961E2182639E4228AABA70B12B107E79 00:23:47.471 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 961E2182639E4228AABA70B12B107E79 == \9\6\1\E\2\1\8\2\6\3\9\E\4\2\2\8\A\A\B\A\7\0\B\1\2\B\1\0\7\E\7\9 ]] 00:23:47.471 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:47.471 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 5fda27a0-14ab-4a50-924b-a9f4bd541cc8 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5fda27a014ab4a50924ba9f4bd541cc8 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5FDA27A014AB4A50924BA9F4BD541CC8 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 5FDA27A014AB4A50924BA9F4BD541CC8 == \5\F\D\A\2\7\A\0\1\4\A\B\4\A\5\0\9\2\4\B\A\9\F\4\B\D\5\4\1\C\C\8 ]] 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:47.472 17:06:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 7ac76653-3131-4f5f-83a1-02cf1dfed999 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7ac7665331314f5f83a102cf1dfed999 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7AC7665331314F5F83A102CF1DFED999 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 7AC7665331314F5F83A102CF1DFED999 == \7\A\C\7\6\6\5\3\3\1\3\1\4\F\5\F\8\3\A\1\0\2\C\F\1\D\F\E\D\9\9\9 ]] 00:23:47.472 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:47.733 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:47.733 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:47.733 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2039085 00:23:47.733 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2039085 ']' 00:23:47.733 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2039085 00:23:47.733 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:47.733 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.733 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2039085 00:23:47.733 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:47.733 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:47.733 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2039085' 00:23:47.733 killing process with pid 2039085 00:23:47.733 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2039085 00:23:47.733 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2039085 00:23:47.993 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:47.993 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:47.993 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:47.993 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:47.993 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:23:47.993 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:47.993 17:06:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:47.993 rmmod nvme_tcp 00:23:47.993 rmmod nvme_fabrics 00:23:47.993 rmmod nvme_keyring 00:23:47.993 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:47.993 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:47.993 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:47.993 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2039001 ']' 00:23:47.993 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2039001 00:23:47.993 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2039001 ']' 00:23:47.993 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2039001 00:23:47.993 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:47.993 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.993 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2039001 00:23:47.993 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:47.993 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:47.993 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2039001' 00:23:47.993 killing process with pid 2039001 00:23:47.994 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2039001 00:23:47.994 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2039001 00:23:48.255 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:48.255 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:48.255 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:48.255 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:48.255 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:48.255 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:48.255 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:48.255 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:48.255 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:48.256 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.256 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.256 17:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.170 17:06:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:50.170 00:23:50.170 real 0m15.065s 00:23:50.170 user 
0m11.546s 00:23:50.170 sys 0m6.912s 00:23:50.170 17:06:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:50.170 17:06:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:50.170 ************************************ 00:23:50.170 END TEST nvmf_nsid 00:23:50.170 ************************************ 00:23:50.432 17:06:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:50.432 00:23:50.432 real 13m6.676s 00:23:50.432 user 27m26.858s 00:23:50.432 sys 3m56.115s 00:23:50.433 17:06:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:50.433 17:06:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:50.433 ************************************ 00:23:50.433 END TEST nvmf_target_extra 00:23:50.433 ************************************ 00:23:50.433 17:06:42 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:50.433 17:06:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:50.433 17:06:42 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:50.433 17:06:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.433 ************************************ 00:23:50.433 START TEST nvmf_host 00:23:50.433 ************************************ 00:23:50.433 17:06:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:50.433 * Looking for test storage... 00:23:50.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:50.433 17:06:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:50.433 17:06:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:50.433 17:06:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:50.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.695 --rc genhtml_branch_coverage=1 00:23:50.695 --rc genhtml_function_coverage=1 00:23:50.695 --rc genhtml_legend=1 00:23:50.695 --rc geninfo_all_blocks=1 00:23:50.695 --rc geninfo_unexecuted_blocks=1 00:23:50.695 00:23:50.695 ' 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:50.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.695 --rc genhtml_branch_coverage=1 00:23:50.695 --rc genhtml_function_coverage=1 00:23:50.695 --rc genhtml_legend=1 00:23:50.695 --rc geninfo_all_blocks=1 00:23:50.695 --rc geninfo_unexecuted_blocks=1 00:23:50.695 00:23:50.695 ' 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:50.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.695 --rc genhtml_branch_coverage=1 00:23:50.695 --rc genhtml_function_coverage=1 00:23:50.695 --rc genhtml_legend=1 00:23:50.695 --rc geninfo_all_blocks=1 00:23:50.695 --rc geninfo_unexecuted_blocks=1 00:23:50.695 00:23:50.695 ' 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:50.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.695 --rc genhtml_branch_coverage=1 00:23:50.695 --rc genhtml_function_coverage=1 00:23:50.695 --rc genhtml_legend=1 00:23:50.695 --rc geninfo_all_blocks=1 00:23:50.695 --rc geninfo_unexecuted_blocks=1 00:23:50.695 00:23:50.695 ' 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:50.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:50.695 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:50.696 17:06:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:50.696 17:06:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:50.696 17:06:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:50.696 17:06:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:50.696 17:06:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:50.696 17:06:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:50.696 17:06:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:50.696 17:06:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.696 ************************************ 00:23:50.696 START TEST nvmf_multicontroller 00:23:50.696 ************************************ 00:23:50.696 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:50.696 * Looking for test storage... 
00:23:50.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.696 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:50.696 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:23:50.696 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:50.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.958 --rc genhtml_branch_coverage=1 00:23:50.958 --rc genhtml_function_coverage=1 00:23:50.958 --rc genhtml_legend=1 00:23:50.958 --rc geninfo_all_blocks=1 00:23:50.958 --rc geninfo_unexecuted_blocks=1 00:23:50.958 00:23:50.958 ' 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:50.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.958 --rc genhtml_branch_coverage=1 00:23:50.958 --rc genhtml_function_coverage=1 00:23:50.958 --rc genhtml_legend=1 00:23:50.958 --rc geninfo_all_blocks=1 00:23:50.958 --rc geninfo_unexecuted_blocks=1 00:23:50.958 00:23:50.958 ' 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:50.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.958 --rc genhtml_branch_coverage=1 00:23:50.958 --rc genhtml_function_coverage=1 00:23:50.958 --rc genhtml_legend=1 00:23:50.958 --rc geninfo_all_blocks=1 00:23:50.958 --rc geninfo_unexecuted_blocks=1 00:23:50.958 00:23:50.958 ' 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:50.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.958 --rc genhtml_branch_coverage=1 00:23:50.958 --rc genhtml_function_coverage=1 00:23:50.958 --rc genhtml_legend=1 00:23:50.958 --rc geninfo_all_blocks=1 00:23:50.958 --rc geninfo_unexecuted_blocks=1 00:23:50.958 00:23:50.958 ' 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:50.958 17:06:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.958 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:50.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:50.959 17:06:42 
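The "[: : integer expression expected" complaint above is benign: build_nvmf_app_args runs an integer test ('[' '' -eq 1 ']') on a flag that is unset in this configuration, the test command simply returns an error status, and the guarded branch is skipped. If the noise mattered, defaulting the expansion keeps the comparison valid; a sketch, where SOME_TEST_FLAG is a hypothetical placeholder rather than the variable common.sh actually checks:

    # Feeding an empty string to -eq triggers "integer expression expected".
    # Defaulting the variable keeps the test false without the error message.
    # SOME_TEST_FLAG is a hypothetical name used for illustration only.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi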
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:50.959 17:06:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:59.103 
17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:59.103 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:59.104 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:59.104 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:59.104 17:06:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:59.104 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:59.104 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
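Interface discovery above is plain sysfs walking: known NIC PCI IDs are bucketed into the e810, x722, and mlx arrays (this run matched two E810 ports, 0x8086:0x159b), and each matching PCI function is then resolved to its kernel net device through /sys/bus/pci/devices/<addr>/net/. A condensed sketch of that resolution step, reusing the two addresses the log reports:

    # Resolve NIC PCI functions to net interface names via sysfs, mirroring
    # the "Found net devices under ..." lines above; the PCI addresses are
    # the two E810 ports this run discovered.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdir" ] || continue    # no net driver bound to this function
            echo "Found net device under $pci: ${netdir##*/}"
        done
    done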
00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:59.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:23:59.104 00:23:59.104 --- 10.0.0.2 ping statistics --- 00:23:59.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.104 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:59.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:59.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:23:59.104 00:23:59.104 --- 10.0.0.1 ping statistics --- 00:23:59.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.104 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2044182 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2044182 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2044182 ']' 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.104 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.105 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.105 17:06:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.105 [2024-11-20 17:06:50.508729] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
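nvmf_tcp_init, traced above, fakes a two-host NVMe/TCP topology on a single machine: the first E810 port (cvl_0_0) becomes the target NIC inside a fresh network namespace, the second (cvl_0_1) stays in the root namespace as the initiator, each side gets one address from 10.0.0.0/24, an iptables rule opens the listener port, and a ping in each direction proves the plumbing before nvmf_tgt is launched inside the namespace. The same setup as bare commands (root required; interface and namespace names taken from this log):

    # Target/initiator split used by the test, condensed from the trace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root netns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target netns -> initiator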
00:23:59.105 [2024-11-20 17:06:50.508802] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.105 [2024-11-20 17:06:50.608736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:59.105 [2024-11-20 17:06:50.662371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.105 [2024-11-20 17:06:50.662418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.105 [2024-11-20 17:06:50.662429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.105 [2024-11-20 17:06:50.662437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.105 [2024-11-20 17:06:50.662444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:59.105 [2024-11-20 17:06:50.664512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.105 [2024-11-20 17:06:50.664676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.105 [2024-11-20 17:06:50.664678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.366 [2024-11-20 17:06:51.357763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.366 Malloc0 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.366 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.367 [2024-11-20 17:06:51.434757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.367 [2024-11-20 17:06:51.446632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.367 Malloc1 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2044534 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2044534 /var/tmp/bdevperf.sock 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2044534 ']' 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:59.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
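With the target up, host/multicontroller.sh provisions two subsystems that deliberately overlap: cnode1 and cnode2 each get a 64 MB, 512-byte-block malloc namespace plus listeners on both 10.0.0.2:4420 and :4421, which is what lets the multipath cases below distinguish "same controller, new path" from "different subsystem, same path". The rpc_cmd calls above, replayed as explicit invocations of SPDK's scripts/rpc.py (which rpc_cmd wraps; the default socket /var/tmp/spdk.sock reaches the target):

    # Subsystem provisioning from the trace, as explicit rpc.py calls.
    RPC="scripts/rpc.py"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421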
00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.367 17:06:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:00.310 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.310 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:00.310 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:00.310 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.310 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:00.571 NVMe0n1 00:24:00.571 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.571 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:00.571 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.571 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:00.571 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.572 1 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:00.572 request: 00:24:00.572 { 00:24:00.572 "name": "NVMe0", 00:24:00.572 "trtype": "tcp", 00:24:00.572 "traddr": "10.0.0.2", 00:24:00.572 "adrfam": "ipv4", 00:24:00.572 "trsvcid": "4420", 00:24:00.572 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:24:00.572 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:00.572 "hostaddr": "10.0.0.1", 00:24:00.572 "prchk_reftag": false, 00:24:00.572 "prchk_guard": false, 00:24:00.572 "hdgst": false, 00:24:00.572 "ddgst": false, 00:24:00.572 "allow_unrecognized_csi": false, 00:24:00.572 "method": "bdev_nvme_attach_controller", 00:24:00.572 "req_id": 1 00:24:00.572 } 00:24:00.572 Got JSON-RPC error response 00:24:00.572 response: 00:24:00.572 { 00:24:00.572 "code": -114, 00:24:00.572 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:00.572 } 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:00.572 request: 00:24:00.572 { 00:24:00.572 "name": "NVMe0", 00:24:00.572 "trtype": "tcp", 00:24:00.572 "traddr": "10.0.0.2", 00:24:00.572 "adrfam": "ipv4", 00:24:00.572 "trsvcid": "4420", 00:24:00.572 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:00.572 "hostaddr": "10.0.0.1", 00:24:00.572 "prchk_reftag": false, 00:24:00.572 "prchk_guard": false, 00:24:00.572 "hdgst": false, 00:24:00.572 "ddgst": false, 00:24:00.572 "allow_unrecognized_csi": false, 00:24:00.572 "method": "bdev_nvme_attach_controller", 00:24:00.572 "req_id": 1 00:24:00.572 } 00:24:00.572 Got JSON-RPC error response 00:24:00.572 response: 00:24:00.572 { 00:24:00.572 "code": -114, 00:24:00.572 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:00.572 } 00:24:00.572 17:06:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:00.572 request: 00:24:00.572 { 00:24:00.572 "name": "NVMe0", 00:24:00.572 "trtype": "tcp", 00:24:00.572 "traddr": "10.0.0.2", 00:24:00.572 "adrfam": "ipv4", 00:24:00.572 "trsvcid": "4420", 00:24:00.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.572 "hostaddr": "10.0.0.1", 00:24:00.572 "prchk_reftag": false, 00:24:00.572 "prchk_guard": false, 00:24:00.572 "hdgst": false, 00:24:00.572 "ddgst": false, 00:24:00.572 "multipath": "disable", 00:24:00.572 "allow_unrecognized_csi": false, 00:24:00.572 "method": "bdev_nvme_attach_controller", 00:24:00.572 "req_id": 1 00:24:00.572 } 00:24:00.572 Got JSON-RPC error response 00:24:00.572 response: 00:24:00.572 { 00:24:00.572 "code": -114, 00:24:00.572 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:00.572 } 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:00.572 17:06:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.572 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:00.572 request: 00:24:00.572 { 00:24:00.572 "name": "NVMe0", 00:24:00.572 "trtype": "tcp", 00:24:00.572 "traddr": "10.0.0.2", 00:24:00.572 "adrfam": "ipv4", 00:24:00.572 "trsvcid": "4420", 00:24:00.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.572 "hostaddr": "10.0.0.1", 00:24:00.572 "prchk_reftag": false, 00:24:00.572 "prchk_guard": false, 00:24:00.572 "hdgst": false, 00:24:00.572 "ddgst": false, 00:24:00.572 "multipath": "failover", 00:24:00.572 "allow_unrecognized_csi": false, 00:24:00.572 "method": "bdev_nvme_attach_controller", 00:24:00.572 "req_id": 1 00:24:00.572 } 00:24:00.572 Got JSON-RPC error response 00:24:00.572 response: 00:24:00.572 { 00:24:00.572 "code": -114, 00:24:00.572 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:00.573 } 00:24:00.573 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:00.573 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:00.573 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:00.573 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:00.573 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:00.573 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:00.573 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.573 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:00.573 NVMe0n1 00:24:00.573 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
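The four NOT cases above pin down how bdev_nvme_attach_controller treats a name that is already in use: reattaching NVMe0 with a different hostnqn, pointing it at a different subsystem NQN, or retrying the identical, already-attached path with -x disable or -x failover all come back with JSON-RPC error -114, while attaching the same subsystem through the second listener port (4421) succeeds and is folded into NVMe0 as an additional path. One failing and one succeeding call from the trace, replayed against the bdevperf RPC socket:

    # Rejected: existing name NVMe0, same address/port, different subsystem.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
    # -> error -114: "A controller named NVMe0 already exists with the
    #    specified network path"

    # Accepted: the same subsystem via the second listener port becomes an
    # extra path on the existing NVMe0 controller.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1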
00:24:00.573 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:00.573 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.573 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:00.573 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.573 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:00.573 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.573 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:00.833 00:24:00.833 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.833 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:00.833 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.833 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:00.833 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:00.833 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.833 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:00.833 17:06:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:02.218 { 00:24:02.218 "results": [ 00:24:02.219 { 00:24:02.219 "job": "NVMe0n1", 00:24:02.219 "core_mask": "0x1", 00:24:02.219 "workload": "write", 00:24:02.219 "status": "finished", 00:24:02.219 "queue_depth": 128, 00:24:02.219 "io_size": 4096, 00:24:02.219 "runtime": 1.006297, 00:24:02.219 "iops": 27726.406816277897, 00:24:02.219 "mibps": 108.30627662608553, 00:24:02.219 "io_failed": 0, 00:24:02.219 "io_timeout": 0, 00:24:02.219 "avg_latency_us": 4602.36783484463, 00:24:02.219 "min_latency_us": 2880.8533333333335, 00:24:02.219 "max_latency_us": 11523.413333333334 00:24:02.219 } 00:24:02.219 ], 00:24:02.219 "core_count": 1 00:24:02.219 } 00:24:02.219 17:06:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:02.219 17:06:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.219 17:06:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.219 17:06:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.219 17:06:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:02.219 17:06:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2044534 00:24:02.219 17:06:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 2044534 ']' 00:24:02.219 17:06:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2044534 00:24:02.219 17:06:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2044534 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2044534' 00:24:02.219 killing process with pid 2044534 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2044534 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2044534 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:24:02.219 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:02.219 [2024-11-20 17:06:51.576697] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:24:02.219 [2024-11-20 17:06:51.576778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044534 ] 00:24:02.219 [2024-11-20 17:06:51.670364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.219 [2024-11-20 17:06:51.724246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.219 [2024-11-20 17:06:52.831622] bdev.c:4912:bdev_name_add: *ERROR*: Bdev name 3fb24b49-ed2e-458e-b3ff-ee6c28ac768d already exists 00:24:02.219 [2024-11-20 17:06:52.831653] bdev.c:8112:bdev_register: *ERROR*: Unable to add uuid:3fb24b49-ed2e-458e-b3ff-ee6c28ac768d alias for bdev NVMe1n1 00:24:02.219 [2024-11-20 17:06:52.831662] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:02.219 Running I/O for 1 seconds... 00:24:02.219 27725.00 IOPS, 108.30 MiB/s 00:24:02.219 Latency(us) 00:24:02.219 [2024-11-20T16:06:54.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.219 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:02.219 NVMe0n1 : 1.01 27726.41 108.31 0.00 0.00 4602.37 2880.85 11523.41 00:24:02.219 [2024-11-20T16:06:54.395Z] =================================================================================================================== 00:24:02.219 [2024-11-20T16:06:54.395Z] Total : 27726.41 108.31 0.00 0.00 4602.37 2880.85 11523.41 00:24:02.219 Received shutdown signal, test time was about 1.000000 seconds 00:24:02.219 00:24:02.219 Latency(us) 00:24:02.219 [2024-11-20T16:06:54.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.219 [2024-11-20T16:06:54.395Z] =================================================================================================================== 00:24:02.219 [2024-11-20T16:06:54.395Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:02.219 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:02.219 rmmod nvme_tcp 00:24:02.219 rmmod nvme_fabrics 00:24:02.219 rmmod nvme_keyring 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 
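The bdevperf summary echoed in the try.txt dump above is internally consistent: 27726.41 write IOPS at the 4096-byte I/O size is 27726.41 * 4096 / 2^20 ≈ 108.31 MiB/s, exactly the MiB/s column, and Little's law (in-flight I/Os = IOPS * latency) gives 27726 * 4602 us ≈ 128, matching the configured queue depth. A quick arithmetic check:

    # Sanity-check the bdevperf numbers: throughput and average latency both
    # follow from the measured IOPS, the 4 KiB I/O size, and queue depth 128.
    awk 'BEGIN {
        iops = 27726.406816
        printf "throughput: %.2f MiB/s\n", iops * 4096 / 1048576    # ~108.31
        printf "implied avg latency: %.0f us\n", 128 / iops * 1e6   # ~4616
    }'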
00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2044182 ']' 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2044182 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2044182 ']' 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2044182 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2044182 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2044182' 00:24:02.219 killing process with pid 2044182 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2044182 00:24:02.219 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2044182 00:24:02.481 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:02.481 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:02.481 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:02.481 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:02.481 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:02.481 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:02.481 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:02.481 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:02.481 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:02.481 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.481 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:02.481 17:06:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.394 17:06:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:04.655 00:24:04.655 real 0m13.867s 00:24:04.655 user 0m16.541s 00:24:04.655 sys 0m6.579s 00:24:04.655 17:06:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.655 17:06:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:04.655 ************************************ 00:24:04.655 END TEST nvmf_multicontroller 00:24:04.655 ************************************ 00:24:04.655 17:06:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:04.655 17:06:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:04.655 17:06:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:04.655 17:06:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.655 ************************************ 00:24:04.655 START TEST nvmf_aer 00:24:04.655 ************************************ 00:24:04.655 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:04.655 * Looking for test storage... 00:24:04.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:04.655 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:04.655 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:24:04.655 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:04.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.917 --rc genhtml_branch_coverage=1 00:24:04.917 --rc genhtml_function_coverage=1 00:24:04.917 --rc genhtml_legend=1 00:24:04.917 --rc geninfo_all_blocks=1 00:24:04.917 --rc geninfo_unexecuted_blocks=1 00:24:04.917 00:24:04.917 ' 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:04.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.917 --rc genhtml_branch_coverage=1 00:24:04.917 --rc genhtml_function_coverage=1 00:24:04.917 --rc genhtml_legend=1 00:24:04.917 --rc geninfo_all_blocks=1 00:24:04.917 --rc geninfo_unexecuted_blocks=1 00:24:04.917 00:24:04.917 ' 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:04.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.917 --rc genhtml_branch_coverage=1 00:24:04.917 --rc genhtml_function_coverage=1 00:24:04.917 --rc genhtml_legend=1 00:24:04.917 --rc geninfo_all_blocks=1 00:24:04.917 --rc geninfo_unexecuted_blocks=1 00:24:04.917 00:24:04.917 ' 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:04.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.917 --rc genhtml_branch_coverage=1 00:24:04.917 --rc genhtml_function_coverage=1 00:24:04.917 --rc genhtml_legend=1 00:24:04.917 --rc geninfo_all_blocks=1 00:24:04.917 --rc geninfo_unexecuted_blocks=1 00:24:04.917 00:24:04.917 ' 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.917 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:04.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:04.918 17:06:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:13.062 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:13.062 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:13.062 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:13.062 17:07:04 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:13.062 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:13.062 
17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:13.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:13.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms
00:24:13.062
00:24:13.062 --- 10.0.0.2 ping statistics ---
00:24:13.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:13.062 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms
00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:13.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:13.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms
00:24:13.062
00:24:13.062 --- 10.0.0.1 ping statistics ---
00:24:13.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:13.062 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms
00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:13.062 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2049221
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2049221
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2049221 ']'
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:13.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:13.063 17:07:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:13.063 [2024-11-20 17:07:04.493414] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization...
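At this point the test bed from nvmftestinit is fully plumbed: one port of the dual-port E810 NIC (cvl_0_0) acts as the target inside a private network namespace, the other (cvl_0_1) stays in the default namespace as the initiator, and ping succeeds in both directions. Collected from the trace above into one place (interface names and the 10.0.0.0/24 addressing are specific to this run):

    ip netns add cvl_0_0_ns_spdk                         # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port

nvmf_tgt is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so the target listens on 10.0.0.2 while initiator-side tools connect from the default namespace.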
00:24:13.063 [2024-11-20 17:07:04.493482] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.063 [2024-11-20 17:07:04.593804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:13.063 [2024-11-20 17:07:04.646953] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.063 [2024-11-20 17:07:04.647005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.063 [2024-11-20 17:07:04.647013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.063 [2024-11-20 17:07:04.647021] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.063 [2024-11-20 17:07:04.647027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:13.063 [2024-11-20 17:07:04.649444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.063 [2024-11-20 17:07:04.649602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.063 [2024-11-20 17:07:04.649764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:13.063 [2024-11-20 17:07:04.649765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.323 [2024-11-20 17:07:05.375624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.323 Malloc0 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.323 [2024-11-20 17:07:05.453937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.323 [ 00:24:13.323 { 00:24:13.323 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:13.323 "subtype": "Discovery", 00:24:13.323 "listen_addresses": [], 00:24:13.323 "allow_any_host": true, 00:24:13.323 "hosts": [] 00:24:13.323 }, 00:24:13.323 { 00:24:13.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.323 "subtype": "NVMe", 00:24:13.323 "listen_addresses": [ 00:24:13.323 { 00:24:13.323 "trtype": "TCP", 00:24:13.323 "adrfam": "IPv4", 00:24:13.323 "traddr": "10.0.0.2", 00:24:13.323 "trsvcid": "4420" 00:24:13.323 } 00:24:13.323 ], 00:24:13.323 "allow_any_host": true, 00:24:13.323 "hosts": [], 00:24:13.323 "serial_number": "SPDK00000000000001", 00:24:13.323 "model_number": "SPDK bdev Controller", 00:24:13.323 "max_namespaces": 2, 00:24:13.323 "min_cntlid": 1, 00:24:13.323 "max_cntlid": 65519, 00:24:13.323 "namespaces": [ 00:24:13.323 { 00:24:13.323 "nsid": 1, 00:24:13.323 "bdev_name": "Malloc0", 00:24:13.323 "name": "Malloc0", 00:24:13.323 "nguid": "233E992D98BD4112B6AD13956000F963", 00:24:13.323 "uuid": "233e992d-98bd-4112-b6ad-13956000f963" 00:24:13.323 } 00:24:13.323 ] 00:24:13.323 } 00:24:13.323 ] 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2049423 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:24:13.323 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:13.583 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:13.583 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:24:13.583 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:24:13.583 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:13.583 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:13.583 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:24:13.583 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:24:13.583 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.844 Malloc1 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.844 Asynchronous Event Request test 00:24:13.844 Attaching to 10.0.0.2 00:24:13.844 Attached to 10.0.0.2 00:24:13.844 Registering asynchronous event callbacks... 00:24:13.844 Starting namespace attribute notice tests for all controllers... 00:24:13.844 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:13.844 aer_cb - Changed Namespace 00:24:13.844 Cleaning up... 
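The rpc_cmd sequence above provisions the target end to end and then deliberately changes it while the aer tool is attached: the second nvmf_subsystem_add_ns is what fires the Changed Namespace List notice (NVMe log page 0x04, the 'aer_cb for log page 4' line in the output). rpc_cmd in these tests wraps SPDK's scripts/rpc.py, so the same provisioning written as direct rpc.py calls would look roughly like this (default socket /var/tmp/spdk.sock assumed; arguments taken verbatim from the trace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Added while the aer tool is connected; this is what triggers the AEN:
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2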
00:24:13.844 [ 00:24:13.844 { 00:24:13.844 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:13.844 "subtype": "Discovery", 00:24:13.844 "listen_addresses": [], 00:24:13.844 "allow_any_host": true, 00:24:13.844 "hosts": [] 00:24:13.844 }, 00:24:13.844 { 00:24:13.844 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.844 "subtype": "NVMe", 00:24:13.844 "listen_addresses": [ 00:24:13.844 { 00:24:13.844 "trtype": "TCP", 00:24:13.844 "adrfam": "IPv4", 00:24:13.844 "traddr": "10.0.0.2", 00:24:13.844 "trsvcid": "4420" 00:24:13.844 } 00:24:13.844 ], 00:24:13.844 "allow_any_host": true, 00:24:13.844 "hosts": [], 00:24:13.844 "serial_number": "SPDK00000000000001", 00:24:13.844 "model_number": "SPDK bdev Controller", 00:24:13.844 "max_namespaces": 2, 00:24:13.844 "min_cntlid": 1, 00:24:13.844 "max_cntlid": 65519, 00:24:13.844 "namespaces": [ 00:24:13.844 { 00:24:13.844 "nsid": 1, 00:24:13.844 "bdev_name": "Malloc0", 00:24:13.844 "name": "Malloc0", 00:24:13.844 "nguid": "233E992D98BD4112B6AD13956000F963", 00:24:13.844 "uuid": "233e992d-98bd-4112-b6ad-13956000f963" 00:24:13.844 }, 00:24:13.844 { 00:24:13.844 "nsid": 2, 00:24:13.844 "bdev_name": "Malloc1", 00:24:13.844 "name": "Malloc1", 00:24:13.844 "nguid": "419D590BEDB2488DAF2D336C14B16A37", 00:24:13.844 "uuid": "419d590b-edb2-488d-af2d-336c14b16a37" 00:24:13.844 } 00:24:13.844 ] 00:24:13.844 } 00:24:13.844 ] 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2049423 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:13.844 rmmod 
nvme_tcp 00:24:13.844 rmmod nvme_fabrics 00:24:13.844 rmmod nvme_keyring 00:24:13.844 17:07:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:13.844 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:13.844 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:13.844 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2049221 ']' 00:24:13.844 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2049221 00:24:13.844 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2049221 ']' 00:24:13.844 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2049221 00:24:13.844 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:24:13.844 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.104 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2049221 00:24:14.104 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:14.104 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:14.104 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2049221' 00:24:14.104 killing process with pid 2049221 00:24:14.104 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2049221 00:24:14.104 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2049221 00:24:14.104 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:14.104 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:14.104 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:14.104 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:14.104 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:14.104 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:14.104 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:14.104 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:14.104 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:14.104 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.104 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.104 17:07:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.648 17:07:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:16.649 00:24:16.649 real 0m11.667s 00:24:16.649 user 0m8.549s 00:24:16.649 sys 0m6.260s 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:16.649 ************************************ 00:24:16.649 END TEST nvmf_aer 00:24:16.649 ************************************ 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.649 ************************************ 00:24:16.649 START TEST nvmf_async_init 00:24:16.649 ************************************ 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:16.649 * Looking for test storage... 00:24:16.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:16.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.649 --rc genhtml_branch_coverage=1 00:24:16.649 --rc genhtml_function_coverage=1 00:24:16.649 --rc genhtml_legend=1 00:24:16.649 --rc geninfo_all_blocks=1 00:24:16.649 --rc geninfo_unexecuted_blocks=1 00:24:16.649 00:24:16.649 ' 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:16.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.649 --rc genhtml_branch_coverage=1 00:24:16.649 --rc genhtml_function_coverage=1 00:24:16.649 --rc genhtml_legend=1 00:24:16.649 --rc geninfo_all_blocks=1 00:24:16.649 --rc geninfo_unexecuted_blocks=1 00:24:16.649 00:24:16.649 ' 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:16.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.649 --rc genhtml_branch_coverage=1 00:24:16.649 --rc genhtml_function_coverage=1 00:24:16.649 --rc genhtml_legend=1 00:24:16.649 --rc geninfo_all_blocks=1 00:24:16.649 --rc geninfo_unexecuted_blocks=1 00:24:16.649 00:24:16.649 ' 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:16.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.649 --rc genhtml_branch_coverage=1 00:24:16.649 --rc genhtml_function_coverage=1 00:24:16.649 --rc genhtml_legend=1 00:24:16.649 --rc geninfo_all_blocks=1 00:24:16.649 --rc geninfo_unexecuted_blocks=1 00:24:16.649 00:24:16.649 ' 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.649 17:07:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:16.649 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:16.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:16.650 17:07:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=3347934a0dab4c8dae2bb1d7437817a5 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:16.650 17:07:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:24.796 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:24.796 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:24.796 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:24.796 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.796 17:07:15 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:24.796 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:24.797 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:24.797 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:24.797 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:24.797 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:24.797 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.797 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:24.797 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:24.797 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:24.797 17:07:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:24.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:24:24.797 00:24:24.797 --- 10.0.0.2 ping statistics --- 00:24:24.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.797 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:24.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:24.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:24:24.797 00:24:24.797 --- 10.0.0.1 ping statistics --- 00:24:24.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.797 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2053690 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2053690 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2053690 ']' 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.797 17:07:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:24.797 [2024-11-20 17:07:16.308656] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:24:24.797 [2024-11-20 17:07:16.308723] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.797 [2024-11-20 17:07:16.409632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.797 [2024-11-20 17:07:16.461603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.797 [2024-11-20 17:07:16.461657] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.797 [2024-11-20 17:07:16.461666] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.797 [2024-11-20 17:07:16.461673] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.797 [2024-11-20 17:07:16.461679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.797 [2024-11-20 17:07:16.462464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.057 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:25.057 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:25.057 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:25.057 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:25.057 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.057 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.057 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:25.057 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.057 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.057 [2024-11-20 17:07:17.190550] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.057 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.057 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:25.058 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.058 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.058 null0 00:24:25.058 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.058 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:25.058 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.058 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.058 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.058 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:25.058 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:25.058 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.318 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.318 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3347934a0dab4c8dae2bb1d7437817a5 00:24:25.318 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.318 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.318 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.318 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:25.318 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.318 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.318 [2024-11-20 17:07:17.250966] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.318 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.318 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:25.318 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.318 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.579 nvme0n1 00:24:25.579 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.579 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:25.579 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.579 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.579 [ 00:24:25.579 { 00:24:25.579 "name": "nvme0n1", 00:24:25.579 "aliases": [ 00:24:25.579 "3347934a-0dab-4c8d-ae2b-b1d7437817a5" 00:24:25.579 ], 00:24:25.579 "product_name": "NVMe disk", 00:24:25.579 "block_size": 512, 00:24:25.579 "num_blocks": 2097152, 00:24:25.579 "uuid": "3347934a-0dab-4c8d-ae2b-b1d7437817a5", 00:24:25.579 "numa_id": 0, 00:24:25.579 "assigned_rate_limits": { 00:24:25.579 "rw_ios_per_sec": 0, 00:24:25.579 "rw_mbytes_per_sec": 0, 00:24:25.579 "r_mbytes_per_sec": 0, 00:24:25.579 "w_mbytes_per_sec": 0 00:24:25.579 }, 00:24:25.579 "claimed": false, 00:24:25.579 "zoned": false, 00:24:25.579 "supported_io_types": { 00:24:25.579 "read": true, 00:24:25.579 "write": true, 00:24:25.579 "unmap": false, 00:24:25.579 "flush": true, 00:24:25.579 "reset": true, 00:24:25.579 "nvme_admin": true, 00:24:25.579 "nvme_io": true, 00:24:25.579 "nvme_io_md": false, 00:24:25.579 "write_zeroes": true, 00:24:25.579 "zcopy": false, 00:24:25.579 "get_zone_info": false, 00:24:25.579 "zone_management": false, 00:24:25.579 "zone_append": false, 00:24:25.579 "compare": true, 00:24:25.579 "compare_and_write": true, 00:24:25.579 "abort": true, 00:24:25.579 "seek_hole": false, 00:24:25.579 "seek_data": false, 00:24:25.579 "copy": true, 00:24:25.579 "nvme_iov_md": false 00:24:25.579 }, 00:24:25.579 
"memory_domains": [ 00:24:25.579 { 00:24:25.579 "dma_device_id": "system", 00:24:25.579 "dma_device_type": 1 00:24:25.579 } 00:24:25.579 ], 00:24:25.579 "driver_specific": { 00:24:25.579 "nvme": [ 00:24:25.579 { 00:24:25.579 "trid": { 00:24:25.579 "trtype": "TCP", 00:24:25.579 "adrfam": "IPv4", 00:24:25.579 "traddr": "10.0.0.2", 00:24:25.579 "trsvcid": "4420", 00:24:25.579 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:25.579 }, 00:24:25.579 "ctrlr_data": { 00:24:25.579 "cntlid": 1, 00:24:25.579 "vendor_id": "0x8086", 00:24:25.579 "model_number": "SPDK bdev Controller", 00:24:25.579 "serial_number": "00000000000000000000", 00:24:25.579 "firmware_revision": "25.01", 00:24:25.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:25.579 "oacs": { 00:24:25.579 "security": 0, 00:24:25.579 "format": 0, 00:24:25.579 "firmware": 0, 00:24:25.579 "ns_manage": 0 00:24:25.579 }, 00:24:25.579 "multi_ctrlr": true, 00:24:25.579 "ana_reporting": false 00:24:25.579 }, 00:24:25.580 "vs": { 00:24:25.580 "nvme_version": "1.3" 00:24:25.580 }, 00:24:25.580 "ns_data": { 00:24:25.580 "id": 1, 00:24:25.580 "can_share": true 00:24:25.580 } 00:24:25.580 } 00:24:25.580 ], 00:24:25.580 "mp_policy": "active_passive" 00:24:25.580 } 00:24:25.580 } 00:24:25.580 ] 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.580 [2024-11-20 17:07:17.527459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:25.580 [2024-11-20 17:07:17.527549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14aece0 (9): Bad file descriptor 00:24:25.580 [2024-11-20 17:07:17.659266] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.580 [ 00:24:25.580 { 00:24:25.580 "name": "nvme0n1", 00:24:25.580 "aliases": [ 00:24:25.580 "3347934a-0dab-4c8d-ae2b-b1d7437817a5" 00:24:25.580 ], 00:24:25.580 "product_name": "NVMe disk", 00:24:25.580 "block_size": 512, 00:24:25.580 "num_blocks": 2097152, 00:24:25.580 "uuid": "3347934a-0dab-4c8d-ae2b-b1d7437817a5", 00:24:25.580 "numa_id": 0, 00:24:25.580 "assigned_rate_limits": { 00:24:25.580 "rw_ios_per_sec": 0, 00:24:25.580 "rw_mbytes_per_sec": 0, 00:24:25.580 "r_mbytes_per_sec": 0, 00:24:25.580 "w_mbytes_per_sec": 0 00:24:25.580 }, 00:24:25.580 "claimed": false, 00:24:25.580 "zoned": false, 00:24:25.580 "supported_io_types": { 00:24:25.580 "read": true, 00:24:25.580 "write": true, 00:24:25.580 "unmap": false, 00:24:25.580 "flush": true, 00:24:25.580 "reset": true, 00:24:25.580 "nvme_admin": true, 00:24:25.580 "nvme_io": true, 00:24:25.580 "nvme_io_md": false, 00:24:25.580 "write_zeroes": true, 00:24:25.580 "zcopy": false, 00:24:25.580 "get_zone_info": false, 00:24:25.580 "zone_management": false, 00:24:25.580 "zone_append": false, 00:24:25.580 "compare": true, 00:24:25.580 "compare_and_write": true, 00:24:25.580 "abort": true, 00:24:25.580 "seek_hole": false, 00:24:25.580 "seek_data": false, 00:24:25.580 "copy": true, 00:24:25.580 "nvme_iov_md": false 00:24:25.580 }, 00:24:25.580 "memory_domains": [ 00:24:25.580 { 00:24:25.580 "dma_device_id": "system", 00:24:25.580 "dma_device_type": 1 00:24:25.580 } 00:24:25.580 ], 00:24:25.580 "driver_specific": { 00:24:25.580 "nvme": [ 00:24:25.580 { 00:24:25.580 "trid": { 00:24:25.580 "trtype": "TCP", 00:24:25.580 "adrfam": "IPv4", 00:24:25.580 "traddr": "10.0.0.2", 00:24:25.580 "trsvcid": "4420", 00:24:25.580 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:25.580 }, 00:24:25.580 "ctrlr_data": { 00:24:25.580 "cntlid": 2, 00:24:25.580 "vendor_id": "0x8086", 00:24:25.580 "model_number": "SPDK bdev Controller", 00:24:25.580 "serial_number": "00000000000000000000", 00:24:25.580 "firmware_revision": "25.01", 00:24:25.580 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:25.580 "oacs": { 00:24:25.580 "security": 0, 00:24:25.580 "format": 0, 00:24:25.580 "firmware": 0, 00:24:25.580 "ns_manage": 0 00:24:25.580 }, 00:24:25.580 "multi_ctrlr": true, 00:24:25.580 "ana_reporting": false 00:24:25.580 }, 00:24:25.580 "vs": { 00:24:25.580 "nvme_version": "1.3" 00:24:25.580 }, 00:24:25.580 "ns_data": { 00:24:25.580 "id": 1, 00:24:25.580 "can_share": true 00:24:25.580 } 00:24:25.580 } 00:24:25.580 ], 00:24:25.580 "mp_policy": "active_passive" 00:24:25.580 } 00:24:25.580 } 00:24:25.580 ] 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
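Note the cntlid field in the two bdev_get_bdevs dumps: 1 before the reset, 2 after, confirming the host negotiated a genuinely new controller connection rather than reusing the old one. A hedged sketch of that assertion (jq is an assumption here, not something the test itself invokes; the field path matches the JSON above):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Read the controller ID the target handed out for the current connection.
cntlid() { $rpc bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'; }
before=$(cntlid)
$rpc bdev_nvme_reset_controller nvme0
after=$(cntlid)
# A reconnect shows up as a new cntlid (1 -> 2 in the run above).
[[ "$after" -ne "$before" ]] && echo "reconnected: cntlid $before -> $after"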
00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.zFrAjGjyNV 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.zFrAjGjyNV 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.zFrAjGjyNV 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.580 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.580 [2024-11-20 17:07:17.748126] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:25.580 [2024-11-20 17:07:17.748297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.842 [2024-11-20 17:07:17.772207] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:25.842 nvme0n1 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.842 [ 00:24:25.842 { 00:24:25.842 "name": "nvme0n1", 00:24:25.842 "aliases": [ 00:24:25.842 "3347934a-0dab-4c8d-ae2b-b1d7437817a5" 00:24:25.842 ], 00:24:25.842 "product_name": "NVMe disk", 00:24:25.842 "block_size": 512, 00:24:25.842 "num_blocks": 2097152, 00:24:25.842 "uuid": "3347934a-0dab-4c8d-ae2b-b1d7437817a5", 00:24:25.842 "numa_id": 0, 00:24:25.842 "assigned_rate_limits": { 00:24:25.842 "rw_ios_per_sec": 0, 00:24:25.842 "rw_mbytes_per_sec": 0, 00:24:25.842 "r_mbytes_per_sec": 0, 00:24:25.842 "w_mbytes_per_sec": 0 00:24:25.842 }, 00:24:25.842 "claimed": false, 00:24:25.842 "zoned": false, 00:24:25.842 "supported_io_types": { 00:24:25.842 "read": true, 00:24:25.842 "write": true, 00:24:25.842 "unmap": false, 00:24:25.842 "flush": true, 00:24:25.842 "reset": true, 00:24:25.842 "nvme_admin": true, 00:24:25.842 "nvme_io": true, 00:24:25.842 "nvme_io_md": false, 00:24:25.842 "write_zeroes": true, 00:24:25.842 "zcopy": false, 00:24:25.842 "get_zone_info": false, 00:24:25.842 "zone_management": false, 00:24:25.842 "zone_append": false, 00:24:25.842 "compare": true, 00:24:25.842 "compare_and_write": true, 00:24:25.842 "abort": true, 00:24:25.842 "seek_hole": false, 00:24:25.842 "seek_data": false, 00:24:25.842 "copy": true, 00:24:25.842 "nvme_iov_md": false 00:24:25.842 }, 00:24:25.842 "memory_domains": [ 00:24:25.842 { 00:24:25.842 "dma_device_id": "system", 00:24:25.842 "dma_device_type": 1 00:24:25.842 } 00:24:25.842 ], 00:24:25.842 "driver_specific": { 00:24:25.842 "nvme": [ 00:24:25.842 { 00:24:25.842 "trid": { 00:24:25.842 "trtype": "TCP", 00:24:25.842 "adrfam": "IPv4", 00:24:25.842 "traddr": "10.0.0.2", 00:24:25.842 "trsvcid": "4421", 00:24:25.842 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:25.842 }, 00:24:25.842 "ctrlr_data": { 00:24:25.842 "cntlid": 3, 00:24:25.842 "vendor_id": "0x8086", 00:24:25.842 "model_number": "SPDK bdev Controller", 00:24:25.842 "serial_number": "00000000000000000000", 00:24:25.842 "firmware_revision": "25.01", 00:24:25.842 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:25.842 "oacs": { 00:24:25.842 "security": 0, 00:24:25.842 "format": 0, 00:24:25.842 "firmware": 0, 00:24:25.842 "ns_manage": 0 00:24:25.842 }, 00:24:25.842 "multi_ctrlr": true, 00:24:25.842 "ana_reporting": false 00:24:25.842 }, 00:24:25.842 "vs": { 00:24:25.842 "nvme_version": "1.3" 00:24:25.842 }, 00:24:25.842 "ns_data": { 00:24:25.842 "id": 1, 00:24:25.842 "can_share": true 00:24:25.842 } 00:24:25.842 } 00:24:25.842 ], 00:24:25.842 "mp_policy": "active_passive" 00:24:25.842 } 00:24:25.842 } 00:24:25.842 ] 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.zFrAjGjyNV 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
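The tail of the test walks the experimental TLS path end to end: a PSK in NVMe TLS interchange format is written to a 0600 temp file, registered as key0 via keyring_file_add_key, anonymous hosts are disabled on the subsystem, a second listener on port 4421 is created with --secure-channel, host1 is admitted with the PSK, and the initiator attaches with the same key. Both sides log that TLS support is considered experimental, and cntlid advances to 3 on the new connection. Condensed into a sketch with the same RPCs and the test's fixed throwaway key (not a drop-in script; it assumes rpc.py reaches the right SPDK instances on each side):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key=$(mktemp)
# Interchange-format PSK (NVMeTLSkey-1:01:...:); the test uses this fixed value.
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
chmod 0600 "$key"
$rpc keyring_file_add_key key0 "$key"                                   # register with the keyring
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable # PSK-listed hosts only
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0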
00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:25.842 rmmod nvme_tcp 00:24:25.842 rmmod nvme_fabrics 00:24:25.842 rmmod nvme_keyring 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2053690 ']' 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2053690 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2053690 ']' 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2053690 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.842 17:07:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2053690 00:24:26.103 17:07:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:26.103 17:07:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:26.103 17:07:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2053690' 00:24:26.103 killing process with pid 2053690 00:24:26.103 17:07:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2053690 00:24:26.103 17:07:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2053690 00:24:26.103 17:07:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:26.103 17:07:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:26.103 17:07:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:26.103 17:07:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:26.103 17:07:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:26.103 17:07:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:26.103 17:07:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:26.103 17:07:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:26.103 17:07:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:26.103 17:07:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:24:26.103 17:07:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.103 17:07:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:28.667 00:24:28.667 real 0m11.866s 00:24:28.667 user 0m4.233s 00:24:28.667 sys 0m6.226s 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.667 ************************************ 00:24:28.667 END TEST nvmf_async_init 00:24:28.667 ************************************ 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.667 ************************************ 00:24:28.667 START TEST dma 00:24:28.667 ************************************ 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:28.667 * Looking for test storage... 00:24:28.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:28.667 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:28.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.668 --rc genhtml_branch_coverage=1 00:24:28.668 --rc genhtml_function_coverage=1 00:24:28.668 --rc genhtml_legend=1 00:24:28.668 --rc geninfo_all_blocks=1 00:24:28.668 --rc geninfo_unexecuted_blocks=1 00:24:28.668 00:24:28.668 ' 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:28.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.668 --rc genhtml_branch_coverage=1 00:24:28.668 --rc genhtml_function_coverage=1 00:24:28.668 --rc genhtml_legend=1 00:24:28.668 --rc geninfo_all_blocks=1 00:24:28.668 --rc geninfo_unexecuted_blocks=1 00:24:28.668 00:24:28.668 ' 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:28.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.668 --rc genhtml_branch_coverage=1 00:24:28.668 --rc genhtml_function_coverage=1 00:24:28.668 --rc genhtml_legend=1 00:24:28.668 --rc geninfo_all_blocks=1 00:24:28.668 --rc geninfo_unexecuted_blocks=1 00:24:28.668 00:24:28.668 ' 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:28.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.668 --rc genhtml_branch_coverage=1 00:24:28.668 --rc genhtml_function_coverage=1 00:24:28.668 --rc genhtml_legend=1 00:24:28.668 --rc geninfo_all_blocks=1 00:24:28.668 --rc geninfo_unexecuted_blocks=1 00:24:28.668 00:24:28.668 ' 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:28.668 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.669 
17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.669 17:07:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:28.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:28.670 00:24:28.670 real 0m0.240s 00:24:28.670 user 0m0.153s 00:24:28.670 sys 0m0.102s 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:28.670 ************************************ 00:24:28.670 END TEST dma 00:24:28.670 ************************************ 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.670 ************************************ 00:24:28.670 START TEST nvmf_identify 00:24:28.670 
************************************ 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:28.670 * Looking for test storage... 00:24:28.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:24:28.670 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:28.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.932 --rc genhtml_branch_coverage=1 00:24:28.932 --rc genhtml_function_coverage=1 00:24:28.932 --rc genhtml_legend=1 00:24:28.932 --rc geninfo_all_blocks=1 00:24:28.932 --rc geninfo_unexecuted_blocks=1 00:24:28.932 00:24:28.932 ' 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:28.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.932 --rc genhtml_branch_coverage=1 00:24:28.932 --rc genhtml_function_coverage=1 00:24:28.932 --rc genhtml_legend=1 00:24:28.932 --rc geninfo_all_blocks=1 00:24:28.932 --rc geninfo_unexecuted_blocks=1 00:24:28.932 00:24:28.932 ' 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:28.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.932 --rc genhtml_branch_coverage=1 00:24:28.932 --rc genhtml_function_coverage=1 00:24:28.932 --rc genhtml_legend=1 00:24:28.932 --rc geninfo_all_blocks=1 00:24:28.932 --rc geninfo_unexecuted_blocks=1 00:24:28.932 00:24:28.932 ' 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:28.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.932 --rc genhtml_branch_coverage=1 00:24:28.932 --rc genhtml_function_coverage=1 00:24:28.932 --rc genhtml_legend=1 00:24:28.932 --rc geninfo_all_blocks=1 00:24:28.932 --rc geninfo_unexecuted_blocks=1 00:24:28.932 00:24:28.932 ' 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.932 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:28.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:28.933 17:07:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:37.211 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:37.211 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
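[Editor's note: the device walk above comes from gather_supported_nvmf_pci_devs in test/nvmf/common.sh. It builds per-family arrays of supported NIC PCI IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox ConnectX parts), then resolves each matching PCI function to its kernel net device through sysfs. A minimal standalone sketch of that PCI-to-netdev lookup, using the 0000:4b:00.0 address from this log; any other PCI function resolves the same way:

    #!/usr/bin/env bash
    # For a given PCI function, list the net devices the kernel bound to it.
    # Mirrors the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) step traced above.
    pci=0000:4b:00.0
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $dev ]] || continue          # glob did not match: no netdev bound to this function
        echo "Found net device under $pci: ${dev##*/}"
    done

On this machine the loop prints cvl_0_0 and cvl_0_1 for the two E810 ports, matching the "Found net devices under ..." lines in the trace.]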
00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:37.211 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:37.211 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:37.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:24:37.211 00:24:37.211 --- 10.0.0.2 ping statistics --- 00:24:37.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.211 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:37.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:24:37.211 00:24:37.211 --- 10.0.0.1 ping statistics --- 00:24:37.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.211 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.211 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:37.212 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:37.212 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.212 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:37.212 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:37.212 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:37.212 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:37.212 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:37.212 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2058320 00:24:37.212 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:37.212 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:37.212 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2058320 00:24:37.212 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2058320 ']' 00:24:37.212 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.212 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:37.212 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.212 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:37.212 17:07:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:37.212 [2024-11-20 17:07:28.495399] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
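[Editor's note: nvmf_tcp_init, traced above, wires the two E810 ports into a point-to-point test topology. One port (cvl_0_0) is moved into a fresh network namespace and addressed as the target at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; the two pings verify both directions before the target starts. A condensed sketch of that setup, assuming the same interface names as this run:

    # Start from clean addresses on both E810 ports.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    # The target port moves into its own namespace so the kernel and SPDK
    # network stacks cannot collide; the initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port on the initiator interface and check both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is why nvmf_tgt is launched below under "ip netns exec cvl_0_0_ns_spdk": the target process must see the namespaced port.]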
00:24:37.212 [2024-11-20 17:07:28.495464] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.212 [2024-11-20 17:07:28.598689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:37.212 [2024-11-20 17:07:28.652503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.212 [2024-11-20 17:07:28.652556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.212 [2024-11-20 17:07:28.652565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.212 [2024-11-20 17:07:28.652572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.212 [2024-11-20 17:07:28.652578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.212 [2024-11-20 17:07:28.654960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.212 [2024-11-20 17:07:28.655118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.212 [2024-11-20 17:07:28.655279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:37.212 [2024-11-20 17:07:28.655280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.212 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:37.212 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:24:37.212 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:37.212 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.212 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:37.212 [2024-11-20 17:07:29.329102] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.212 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.212 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:37.212 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:37.212 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:37.509 Malloc0 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:37.509 [2024-11-20 17:07:29.450479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:37.509 [ 00:24:37.509 { 00:24:37.509 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:37.509 "subtype": "Discovery", 00:24:37.509 "listen_addresses": [ 00:24:37.509 { 00:24:37.509 "trtype": "TCP", 00:24:37.509 "adrfam": "IPv4", 00:24:37.509 "traddr": "10.0.0.2", 00:24:37.509 "trsvcid": "4420" 00:24:37.509 } 00:24:37.509 ], 00:24:37.509 "allow_any_host": true, 00:24:37.509 "hosts": [] 00:24:37.509 }, 00:24:37.509 { 00:24:37.509 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:37.509 "subtype": "NVMe", 00:24:37.509 "listen_addresses": [ 00:24:37.509 { 00:24:37.509 "trtype": "TCP", 00:24:37.509 "adrfam": "IPv4", 00:24:37.509 "traddr": "10.0.0.2", 00:24:37.509 "trsvcid": "4420" 00:24:37.509 } 00:24:37.509 ], 00:24:37.509 "allow_any_host": true, 00:24:37.509 "hosts": [], 00:24:37.509 "serial_number": "SPDK00000000000001", 00:24:37.509 "model_number": "SPDK bdev Controller", 00:24:37.509 "max_namespaces": 32, 00:24:37.509 "min_cntlid": 1, 00:24:37.509 "max_cntlid": 65519, 00:24:37.509 "namespaces": [ 00:24:37.509 { 00:24:37.509 "nsid": 1, 00:24:37.509 "bdev_name": "Malloc0", 00:24:37.509 "name": "Malloc0", 00:24:37.509 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:37.509 "eui64": "ABCDEF0123456789", 00:24:37.509 "uuid": "10dfe82b-7c9d-4ba4-ad8d-2030ee65e08a" 00:24:37.509 } 00:24:37.509 ] 00:24:37.509 } 00:24:37.509 ] 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.509 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:37.509 [2024-11-20 17:07:29.514401] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:24:37.509 [2024-11-20 17:07:29.514450] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2058671 ] 00:24:37.509 [2024-11-20 17:07:29.571824] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:37.509 [2024-11-20 17:07:29.571894] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:37.509 [2024-11-20 17:07:29.571901] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:37.509 [2024-11-20 17:07:29.571919] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:37.509 [2024-11-20 17:07:29.571933] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:37.509 [2024-11-20 17:07:29.572777] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:37.509 [2024-11-20 17:07:29.572826] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x140f690 0 00:24:37.509 [2024-11-20 17:07:29.583176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:37.509 [2024-11-20 17:07:29.583194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:37.509 [2024-11-20 17:07:29.583199] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:37.509 [2024-11-20 17:07:29.583203] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:37.509 [2024-11-20 17:07:29.583249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.509 [2024-11-20 17:07:29.583256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.509 [2024-11-20 17:07:29.583261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x140f690) 00:24:37.509 [2024-11-20 17:07:29.583279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:37.509 [2024-11-20 17:07:29.583302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471100, cid 0, qid 0 00:24:37.509 [2024-11-20 17:07:29.591171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.509 [2024-11-20 17:07:29.591181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.510 [2024-11-20 17:07:29.591185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.591190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471100) on tqpair=0x140f690 00:24:37.510 [2024-11-20 17:07:29.591205] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:37.510 [2024-11-20 17:07:29.591214] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:37.510 [2024-11-20 17:07:29.591220] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:37.510 [2024-11-20 17:07:29.591237] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.591242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.591246] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x140f690) 00:24:37.510 [2024-11-20 17:07:29.591255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-11-20 17:07:29.591278] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471100, cid 0, qid 0 00:24:37.510 [2024-11-20 17:07:29.591493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.510 [2024-11-20 17:07:29.591500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.510 [2024-11-20 17:07:29.591503] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.591507] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471100) on tqpair=0x140f690 00:24:37.510 [2024-11-20 17:07:29.591513] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:37.510 [2024-11-20 17:07:29.591521] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:37.510 [2024-11-20 17:07:29.591529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.591533] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.591536] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x140f690) 00:24:37.510 [2024-11-20 17:07:29.591544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-11-20 17:07:29.591555] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471100, cid 0, qid 0 00:24:37.510 [2024-11-20 17:07:29.591743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.510 [2024-11-20 17:07:29.591751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.510 [2024-11-20 17:07:29.591754] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.591759] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471100) on tqpair=0x140f690 00:24:37.510 [2024-11-20 17:07:29.591766] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:37.510 [2024-11-20 17:07:29.591780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:37.510 [2024-11-20 17:07:29.591789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.591794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.591797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x140f690) 00:24:37.510 [2024-11-20 17:07:29.591804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-11-20 17:07:29.591815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471100, cid 0, qid 0 
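[Editor's note: the *DEBUG* records around this point are spdk_nvme_identify's user-space initiator walking the standard NVMe-oF controller bring-up over the admin queue: FABRIC CONNECT, property reads of VS and CAP, CC.EN toggled through the disable state, a wait for CSTS.RDY = 1, then IDENTIFY. The kernel initiator performs the same handshake. As a hedged cross-check, not part of this test run, the same discovery service could be queried from the shell with nvme-cli, reusing the host NQN generated earlier in the trace:

    # Hypothetical cross-check with the kernel initiator; not executed by this test.
    modprobe nvme-tcp
    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be]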
00:24:37.510 [2024-11-20 17:07:29.592027] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.510 [2024-11-20 17:07:29.592033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.510 [2024-11-20 17:07:29.592037] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.592041] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471100) on tqpair=0x140f690 00:24:37.510 [2024-11-20 17:07:29.592046] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:37.510 [2024-11-20 17:07:29.592056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.592060] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.592064] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x140f690) 00:24:37.510 [2024-11-20 17:07:29.592071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-11-20 17:07:29.592081] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471100, cid 0, qid 0 00:24:37.510 [2024-11-20 17:07:29.592276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.510 [2024-11-20 17:07:29.592286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.510 [2024-11-20 17:07:29.592290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.592294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471100) on tqpair=0x140f690 00:24:37.510 [2024-11-20 17:07:29.592299] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:37.510 [2024-11-20 17:07:29.592304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:37.510 [2024-11-20 17:07:29.592312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:37.510 [2024-11-20 17:07:29.592422] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:37.510 [2024-11-20 17:07:29.592428] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:37.510 [2024-11-20 17:07:29.592439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.592443] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.592447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x140f690) 00:24:37.510 [2024-11-20 17:07:29.592454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-11-20 17:07:29.592465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471100, cid 0, qid 0 00:24:37.510 [2024-11-20 17:07:29.592672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.510 [2024-11-20 17:07:29.592678] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.510 [2024-11-20 17:07:29.592681] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.592685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471100) on tqpair=0x140f690 00:24:37.510 [2024-11-20 17:07:29.592690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:37.510 [2024-11-20 17:07:29.592700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.592704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.592708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x140f690) 00:24:37.510 [2024-11-20 17:07:29.592715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-11-20 17:07:29.592725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471100, cid 0, qid 0 00:24:37.510 [2024-11-20 17:07:29.592904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.510 [2024-11-20 17:07:29.592912] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.510 [2024-11-20 17:07:29.592915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.592919] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471100) on tqpair=0x140f690 00:24:37.510 [2024-11-20 17:07:29.592924] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:37.510 [2024-11-20 17:07:29.592929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:37.510 [2024-11-20 17:07:29.592939] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:37.510 [2024-11-20 17:07:29.592950] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:37.510 [2024-11-20 17:07:29.592966] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.592971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x140f690) 00:24:37.510 [2024-11-20 17:07:29.592978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-11-20 17:07:29.592989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471100, cid 0, qid 0 00:24:37.510 [2024-11-20 17:07:29.593254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:37.510 [2024-11-20 17:07:29.593264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:37.510 [2024-11-20 17:07:29.593270] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.593275] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x140f690): datao=0, datal=4096, cccid=0 00:24:37.510 [2024-11-20 17:07:29.593280] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1471100) on tqpair(0x140f690): expected_datao=0, payload_size=4096 00:24:37.510 [2024-11-20 17:07:29.593285] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.593294] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.593299] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.634323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.510 [2024-11-20 17:07:29.634336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.510 [2024-11-20 17:07:29.634340] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.510 [2024-11-20 17:07:29.634344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471100) on tqpair=0x140f690 00:24:37.510 [2024-11-20 17:07:29.634357] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:37.510 [2024-11-20 17:07:29.634362] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:37.510 [2024-11-20 17:07:29.634367] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:37.510 [2024-11-20 17:07:29.634378] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:37.510 [2024-11-20 17:07:29.634383] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:37.510 [2024-11-20 17:07:29.634389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:37.510 [2024-11-20 17:07:29.634401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:37.511 [2024-11-20 17:07:29.634408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.634413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.634416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x140f690) 00:24:37.511 [2024-11-20 17:07:29.634425] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:37.511 [2024-11-20 17:07:29.634439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471100, cid 0, qid 0 00:24:37.511 [2024-11-20 17:07:29.634587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.511 [2024-11-20 17:07:29.634593] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.511 [2024-11-20 17:07:29.634597] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.634601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471100) on tqpair=0x140f690 00:24:37.511 [2024-11-20 17:07:29.634610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.634614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.634621] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x140f690) 00:24:37.511 
[2024-11-20 17:07:29.634628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.511 [2024-11-20 17:07:29.634635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.634638] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.634642] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x140f690) 00:24:37.511 [2024-11-20 17:07:29.634648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.511 [2024-11-20 17:07:29.634654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.634658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.634661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x140f690) 00:24:37.511 [2024-11-20 17:07:29.634667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.511 [2024-11-20 17:07:29.634673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.634677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.634680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x140f690) 00:24:37.511 [2024-11-20 17:07:29.634686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.511 [2024-11-20 17:07:29.634691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:37.511 [2024-11-20 17:07:29.634700] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:37.511 [2024-11-20 17:07:29.634707] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.634711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x140f690) 00:24:37.511 [2024-11-20 17:07:29.634718] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-11-20 17:07:29.634729] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471100, cid 0, qid 0 00:24:37.511 [2024-11-20 17:07:29.634735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471280, cid 1, qid 0 00:24:37.511 [2024-11-20 17:07:29.634740] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471400, cid 2, qid 0 00:24:37.511 [2024-11-20 17:07:29.634744] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471580, cid 3, qid 0 00:24:37.511 [2024-11-20 17:07:29.634749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471700, cid 4, qid 0 00:24:37.511 [2024-11-20 17:07:29.634982] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.511 [2024-11-20 17:07:29.634989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.511 [2024-11-20 17:07:29.634992] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:37.511 [2024-11-20 17:07:29.634996] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471700) on tqpair=0x140f690 00:24:37.511 [2024-11-20 17:07:29.635005] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:37.511 [2024-11-20 17:07:29.635011] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:37.511 [2024-11-20 17:07:29.635023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.635027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x140f690) 00:24:37.511 [2024-11-20 17:07:29.635033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-11-20 17:07:29.635047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471700, cid 4, qid 0 00:24:37.511 [2024-11-20 17:07:29.639171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:37.511 [2024-11-20 17:07:29.639182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:37.511 [2024-11-20 17:07:29.639186] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.639189] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x140f690): datao=0, datal=4096, cccid=4 00:24:37.511 [2024-11-20 17:07:29.639194] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1471700) on tqpair(0x140f690): expected_datao=0, payload_size=4096 00:24:37.511 [2024-11-20 17:07:29.639199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.639206] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.639210] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.639216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.511 [2024-11-20 17:07:29.639222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.511 [2024-11-20 17:07:29.639226] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.639230] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471700) on tqpair=0x140f690 00:24:37.511 [2024-11-20 17:07:29.639246] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:37.511 [2024-11-20 17:07:29.639274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.639279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x140f690) 00:24:37.511 [2024-11-20 17:07:29.639285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-11-20 17:07:29.639293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.639297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.639300] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x140f690) 00:24:37.511 [2024-11-20 17:07:29.639307] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.511 [2024-11-20 17:07:29.639323] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471700, cid 4, qid 0 00:24:37.511 [2024-11-20 17:07:29.639328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471880, cid 5, qid 0 00:24:37.511 [2024-11-20 17:07:29.639606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:37.511 [2024-11-20 17:07:29.639612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:37.511 [2024-11-20 17:07:29.639616] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.639619] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x140f690): datao=0, datal=1024, cccid=4 00:24:37.511 [2024-11-20 17:07:29.639624] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1471700) on tqpair(0x140f690): expected_datao=0, payload_size=1024 00:24:37.511 [2024-11-20 17:07:29.639628] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.639635] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.639639] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.639644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.511 [2024-11-20 17:07:29.639650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.511 [2024-11-20 17:07:29.639654] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.639657] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471880) on tqpair=0x140f690 00:24:37.511 [2024-11-20 17:07:29.681172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.511 [2024-11-20 17:07:29.681185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.511 [2024-11-20 17:07:29.681189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.681193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471700) on tqpair=0x140f690 00:24:37.511 [2024-11-20 17:07:29.681207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.511 [2024-11-20 17:07:29.681211] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x140f690) 00:24:37.511 [2024-11-20 17:07:29.681219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-11-20 17:07:29.681236] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471700, cid 4, qid 0 00:24:37.777 [2024-11-20 17:07:29.681500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:37.777 [2024-11-20 17:07:29.681511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:37.777 [2024-11-20 17:07:29.681515] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:37.777 [2024-11-20 17:07:29.681519] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x140f690): datao=0, datal=3072, cccid=4 00:24:37.777 [2024-11-20 17:07:29.681524] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1471700) on tqpair(0x140f690): expected_datao=0, payload_size=3072 00:24:37.777 [2024-11-20 17:07:29.681529] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.777 [2024-11-20 17:07:29.681548] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:37.777 [2024-11-20 17:07:29.681552] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:37.777 [2024-11-20 17:07:29.722348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.777 [2024-11-20 17:07:29.722359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.777 [2024-11-20 17:07:29.722363] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.777 [2024-11-20 17:07:29.722367] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471700) on tqpair=0x140f690 00:24:37.777 [2024-11-20 17:07:29.722379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.777 [2024-11-20 17:07:29.722383] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x140f690) 00:24:37.777 [2024-11-20 17:07:29.722390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.777 [2024-11-20 17:07:29.722407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471700, cid 4, qid 0 00:24:37.777 [2024-11-20 17:07:29.722597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:37.777 [2024-11-20 17:07:29.722604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:37.777 [2024-11-20 17:07:29.722607] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:37.777 [2024-11-20 17:07:29.722611] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x140f690): datao=0, datal=8, cccid=4 00:24:37.777 [2024-11-20 17:07:29.722615] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1471700) on tqpair(0x140f690): expected_datao=0, payload_size=8 00:24:37.777 [2024-11-20 17:07:29.722620] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.777 [2024-11-20 17:07:29.722627] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:37.777 [2024-11-20 17:07:29.722630] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:37.777 [2024-11-20 17:07:29.767171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.777 [2024-11-20 17:07:29.767181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.777 [2024-11-20 17:07:29.767185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.777 [2024-11-20 17:07:29.767189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471700) on tqpair=0x140f690
00:24:37.777 =====================================================
00:24:37.777 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:24:37.777 =====================================================
00:24:37.777 Controller Capabilities/Features
00:24:37.777 ================================
00:24:37.777 Vendor ID: 0000
00:24:37.777 Subsystem Vendor ID: 0000
00:24:37.777 Serial Number: ....................
00:24:37.777 Model Number: ........................................
00:24:37.777 Firmware Version: 25.01
00:24:37.777 Recommended Arb Burst: 0
00:24:37.777 IEEE OUI Identifier: 00 00 00
00:24:37.777 Multi-path I/O
00:24:37.777 May have multiple subsystem ports: No
00:24:37.777 May have multiple controllers: No
00:24:37.777 Associated with SR-IOV VF: No
00:24:37.777 Max Data Transfer Size: 131072
00:24:37.777 Max Number of Namespaces: 0
00:24:37.777 Max Number of I/O Queues: 1024
00:24:37.777 NVMe Specification Version (VS): 1.3
00:24:37.777 NVMe Specification Version (Identify): 1.3
00:24:37.777 Maximum Queue Entries: 128
00:24:37.777 Contiguous Queues Required: Yes
00:24:37.777 Arbitration Mechanisms Supported
00:24:37.777 Weighted Round Robin: Not Supported
00:24:37.777 Vendor Specific: Not Supported
00:24:37.777 Reset Timeout: 15000 ms
00:24:37.777 Doorbell Stride: 4 bytes
00:24:37.777 NVM Subsystem Reset: Not Supported
00:24:37.777 Command Sets Supported
00:24:37.777 NVM Command Set: Supported
00:24:37.777 Boot Partition: Not Supported
00:24:37.777 Memory Page Size Minimum: 4096 bytes
00:24:37.777 Memory Page Size Maximum: 4096 bytes
00:24:37.777 Persistent Memory Region: Not Supported
00:24:37.777 Optional Asynchronous Events Supported
00:24:37.777 Namespace Attribute Notices: Not Supported
00:24:37.777 Firmware Activation Notices: Not Supported
00:24:37.777 ANA Change Notices: Not Supported
00:24:37.777 PLE Aggregate Log Change Notices: Not Supported
00:24:37.777 LBA Status Info Alert Notices: Not Supported
00:24:37.777 EGE Aggregate Log Change Notices: Not Supported
00:24:37.777 Normal NVM Subsystem Shutdown event: Not Supported
00:24:37.777 Zone Descriptor Change Notices: Not Supported
00:24:37.777 Discovery Log Change Notices: Supported
00:24:37.777 Controller Attributes
00:24:37.777 128-bit Host Identifier: Not Supported
00:24:37.777 Non-Operational Permissive Mode: Not Supported
00:24:37.777 NVM Sets: Not Supported
00:24:37.777 Read Recovery Levels: Not Supported
00:24:37.777 Endurance Groups: Not Supported
00:24:37.777 Predictable Latency Mode: Not Supported
00:24:37.777 Traffic Based Keep ALive: Not Supported
00:24:37.777 Namespace Granularity: Not Supported
00:24:37.778 SQ Associations: Not Supported
00:24:37.778 UUID List: Not Supported
00:24:37.778 Multi-Domain Subsystem: Not Supported
00:24:37.778 Fixed Capacity Management: Not Supported
00:24:37.778 Variable Capacity Management: Not Supported
00:24:37.778 Delete Endurance Group: Not Supported
00:24:37.778 Delete NVM Set: Not Supported
00:24:37.778 Extended LBA Formats Supported: Not Supported
00:24:37.778 Flexible Data Placement Supported: Not Supported
00:24:37.778
00:24:37.778 Controller Memory Buffer Support
00:24:37.778 ================================
00:24:37.778 Supported: No
00:24:37.778
00:24:37.778 Persistent Memory Region Support
00:24:37.778 ================================
00:24:37.778 Supported: No
00:24:37.778
00:24:37.778 Admin Command Set Attributes
00:24:37.778 ============================
00:24:37.778 Security Send/Receive: Not Supported
00:24:37.778 Format NVM: Not Supported
00:24:37.778 Firmware Activate/Download: Not Supported
00:24:37.778 Namespace Management: Not Supported
00:24:37.778 Device Self-Test: Not Supported
00:24:37.778 Directives: Not Supported
00:24:37.778 NVMe-MI: Not Supported
00:24:37.778 Virtualization Management: Not Supported
00:24:37.778 Doorbell Buffer Config: Not Supported
00:24:37.778 Get LBA Status Capability: Not Supported
00:24:37.778 Command & Feature Lockdown Capability: Not Supported
00:24:37.778 Abort Command Limit: 1
00:24:37.778 Async Event Request Limit: 4
00:24:37.778 Number of Firmware Slots: N/A
00:24:37.778 Firmware Slot 1 Read-Only: N/A
00:24:37.778 Firmware Activation Without Reset: N/A
00:24:37.778 Multiple Update Detection Support: N/A
00:24:37.778 Firmware Update Granularity: No Information Provided
00:24:37.778 Per-Namespace SMART Log: No
00:24:37.778 Asymmetric Namespace Access Log Page: Not Supported
00:24:37.778 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:37.778 Command Effects Log Page: Not Supported
00:24:37.778 Get Log Page Extended Data: Supported
00:24:37.778 Telemetry Log Pages: Not Supported
00:24:37.778 Persistent Event Log Pages: Not Supported
00:24:37.778 Supported Log Pages Log Page: May Support
00:24:37.778 Commands Supported & Effects Log Page: Not Supported
00:24:37.778 Feature Identifiers & Effects Log Page:May Support
00:24:37.778 NVMe-MI Commands & Effects Log Page: May Support
00:24:37.778 Data Area 4 for Telemetry Log: Not Supported
00:24:37.778 Error Log Page Entries Supported: 128
00:24:37.778 Keep Alive: Not Supported
00:24:37.778
00:24:37.778 NVM Command Set Attributes
00:24:37.778 ==========================
00:24:37.778 Submission Queue Entry Size
00:24:37.778 Max: 1
00:24:37.778 Min: 1
00:24:37.778 Completion Queue Entry Size
00:24:37.778 Max: 1
00:24:37.778 Min: 1
00:24:37.778 Number of Namespaces: 0
00:24:37.778 Compare Command: Not Supported
00:24:37.778 Write Uncorrectable Command: Not Supported
00:24:37.778 Dataset Management Command: Not Supported
00:24:37.778 Write Zeroes Command: Not Supported
00:24:37.778 Set Features Save Field: Not Supported
00:24:37.778 Reservations: Not Supported
00:24:37.778 Timestamp: Not Supported
00:24:37.778 Copy: Not Supported
00:24:37.778 Volatile Write Cache: Not Present
00:24:37.778 Atomic Write Unit (Normal): 1
00:24:37.778 Atomic Write Unit (PFail): 1
00:24:37.778 Atomic Compare & Write Unit: 1
00:24:37.778 Fused Compare & Write: Supported
00:24:37.778 Scatter-Gather List
00:24:37.778 SGL Command Set: Supported
00:24:37.778 SGL Keyed: Supported
00:24:37.778 SGL Bit Bucket Descriptor: Not Supported
00:24:37.778 SGL Metadata Pointer: Not Supported
00:24:37.778 Oversized SGL: Not Supported
00:24:37.778 SGL Metadata Address: Not Supported
00:24:37.778 SGL Offset: Supported
00:24:37.778 Transport SGL Data Block: Not Supported
00:24:37.778 Replay Protected Memory Block: Not Supported
00:24:37.778
00:24:37.778 Firmware Slot Information
00:24:37.778 =========================
00:24:37.778 Active slot: 0
00:24:37.778
00:24:37.778
00:24:37.778 Error Log
00:24:37.778 =========
00:24:37.778
00:24:37.778 Active Namespaces
00:24:37.778 =================
00:24:37.778 Discovery Log Page
00:24:37.778 ==================
00:24:37.778 Generation Counter: 2
00:24:37.778 Number of Records: 2
00:24:37.778 Record Format: 0
00:24:37.778
00:24:37.778 Discovery Log Entry 0
00:24:37.778 ----------------------
00:24:37.778 Transport Type: 3 (TCP)
00:24:37.778 Address Family: 1 (IPv4)
00:24:37.778 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:37.778 Entry Flags:
00:24:37.778 Duplicate Returned Information: 1
00:24:37.778 Explicit Persistent Connection Support for Discovery: 1
00:24:37.778 Transport Requirements:
00:24:37.778 Secure Channel: Not Required
00:24:37.778 Port ID: 0 (0x0000)
00:24:37.778 Controller ID: 65535 (0xffff)
00:24:37.778 Admin Max SQ Size: 128
00:24:37.778 Transport Service Identifier: 4420
00:24:37.778 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:37.778 Transport Address: 10.0.0.2
00:24:37.778 Discovery Log Entry 1
00:24:37.778 ----------------------
00:24:37.778 Transport Type: 3 (TCP)
00:24:37.778 Address Family: 1 (IPv4)
00:24:37.778 Subsystem Type: 2 (NVM Subsystem)
00:24:37.778 Entry Flags:
00:24:37.778 Duplicate Returned Information: 0
00:24:37.778 Explicit Persistent Connection Support for Discovery: 0
00:24:37.778 Transport Requirements:
00:24:37.778 Secure Channel: Not Required
00:24:37.778 Port ID: 0 (0x0000)
00:24:37.778 Controller ID: 65535 (0xffff)
00:24:37.778 Admin Max SQ Size: 128
00:24:37.778 Transport Service Identifier: 4420
00:24:37.778 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:24:37.778 Transport Address: 10.0.0.2
[2024-11-20 17:07:29.767298] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:37.778 [2024-11-20 17:07:29.767313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471100) on tqpair=0x140f690 00:24:37.778 [2024-11-20 17:07:29.767321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.778 [2024-11-20 17:07:29.767327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471280) on tqpair=0x140f690 00:24:37.778 [2024-11-20 17:07:29.767331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.778 [2024-11-20 17:07:29.767336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471400) on tqpair=0x140f690 00:24:37.778 [2024-11-20 17:07:29.767341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.778 [2024-11-20 17:07:29.767346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471580) on tqpair=0x140f690 00:24:37.778 [2024-11-20 17:07:29.767350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.778 [2024-11-20 17:07:29.767363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.778 [2024-11-20 17:07:29.767368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.778 [2024-11-20 17:07:29.767371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x140f690) 00:24:37.778 [2024-11-20 17:07:29.767380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.778 [2024-11-20 17:07:29.767396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471580, cid 3, qid 0 00:24:37.778 [2024-11-20 17:07:29.767566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.778 [2024-11-20 17:07:29.767573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.778 [2024-11-20 17:07:29.767576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.778 [2024-11-20 17:07:29.767580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471580) on tqpair=0x140f690 00:24:37.778 [2024-11-20 17:07:29.767588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.778 [2024-11-20 17:07:29.767592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.778 [2024-11-20 17:07:29.767595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x140f690) 00:24:37.778 [2024-11-20
17:07:29.767602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.778 [2024-11-20 17:07:29.767616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471580, cid 3, qid 0 00:24:37.778 [2024-11-20 17:07:29.767815] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.778 [2024-11-20 17:07:29.767822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.778 [2024-11-20 17:07:29.767825] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.778 [2024-11-20 17:07:29.767829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471580) on tqpair=0x140f690 00:24:37.778 [2024-11-20 17:07:29.767834] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:37.778 [2024-11-20 17:07:29.767840] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:37.778 [2024-11-20 17:07:29.767850] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.778 [2024-11-20 17:07:29.767854] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.767858] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x140f690) 00:24:37.779 [2024-11-20 17:07:29.767865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.779 [2024-11-20 17:07:29.767876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471580, cid 3, qid 0 00:24:37.779 [2024-11-20 17:07:29.768117] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.779 [2024-11-20 17:07:29.768126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.779 [2024-11-20 17:07:29.768130] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.768134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471580) on tqpair=0x140f690 00:24:37.779 [2024-11-20 17:07:29.768146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.768150] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.768153] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x140f690) 00:24:37.779 [2024-11-20 17:07:29.768168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.779 [2024-11-20 17:07:29.768179] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471580, cid 3, qid 0 00:24:37.779 [2024-11-20 17:07:29.768389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.779 [2024-11-20 17:07:29.768395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.779 [2024-11-20 17:07:29.768399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.768403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471580) on tqpair=0x140f690 00:24:37.779 [2024-11-20 17:07:29.768412] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.768417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.768420] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x140f690) 00:24:37.779 [2024-11-20 17:07:29.768427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.779 [2024-11-20 17:07:29.768437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471580, cid 3, qid 0 00:24:37.779 [2024-11-20 17:07:29.768672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.779 [2024-11-20 17:07:29.768679] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.779 [2024-11-20 17:07:29.768682] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.768686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471580) on tqpair=0x140f690 00:24:37.779 [2024-11-20 17:07:29.768696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.768700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.768703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x140f690) 00:24:37.779 [2024-11-20 17:07:29.768710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.779 [2024-11-20 17:07:29.768720] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471580, cid 3, qid 0 00:24:37.779 [2024-11-20 17:07:29.768973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.779 [2024-11-20 17:07:29.768979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.779 [2024-11-20 17:07:29.768983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.768987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471580) on tqpair=0x140f690 00:24:37.779 [2024-11-20 17:07:29.768997] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.769001] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.769005] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x140f690) 00:24:37.779 [2024-11-20 17:07:29.769011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.779 [2024-11-20 17:07:29.769021] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471580, cid 3, qid 0 00:24:37.779 [2024-11-20 17:07:29.769227] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.779 [2024-11-20 17:07:29.769233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.779 [2024-11-20 17:07:29.769239] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.769243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471580) on tqpair=0x140f690 00:24:37.779 [2024-11-20 17:07:29.769253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.769257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.769261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x140f690) 00:24:37.779 [2024-11-20 17:07:29.769268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.779 [2024-11-20 17:07:29.769278] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471580, cid 3, qid 0 00:24:37.779 [2024-11-20 17:07:29.769490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.779 [2024-11-20 17:07:29.769496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.779 [2024-11-20 17:07:29.769499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.769503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471580) on tqpair=0x140f690 00:24:37.779 [2024-11-20 17:07:29.769513] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.769517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.769521] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x140f690) 00:24:37.779 [2024-11-20 17:07:29.769527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.779 [2024-11-20 17:07:29.769538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471580, cid 3, qid 0 00:24:37.779 [2024-11-20 17:07:29.769780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.779 [2024-11-20 17:07:29.769786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.779 [2024-11-20 17:07:29.769789] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.769793] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471580) on tqpair=0x140f690 00:24:37.779 [2024-11-20 17:07:29.769803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.769807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.769811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x140f690) 00:24:37.779 [2024-11-20 17:07:29.769817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.779 [2024-11-20 17:07:29.769828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471580, cid 3, qid 0 00:24:37.779 [2024-11-20 17:07:29.770032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.779 [2024-11-20 17:07:29.770039] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.779 [2024-11-20 17:07:29.770042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.770046] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471580) on tqpair=0x140f690 00:24:37.779 [2024-11-20 17:07:29.770057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.770061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.770064] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x140f690) 00:24:37.779 [2024-11-20 17:07:29.770071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.779 [2024-11-20 17:07:29.770081] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471580, cid 3, qid 0 00:24:37.779 
[2024-11-20 17:07:29.770286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.779 [2024-11-20 17:07:29.770293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.779 [2024-11-20 17:07:29.770296] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.770302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471580) on tqpair=0x140f690 00:24:37.779 [2024-11-20 17:07:29.770313] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.770317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.770320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x140f690) 00:24:37.779 [2024-11-20 17:07:29.770327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.779 [2024-11-20 17:07:29.770338] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471580, cid 3, qid 0 00:24:37.779 [2024-11-20 17:07:29.770518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.779 [2024-11-20 17:07:29.770524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.779 [2024-11-20 17:07:29.770528] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.770532] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471580) on tqpair=0x140f690 00:24:37.779 [2024-11-20 17:07:29.770542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.770546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.770549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x140f690) 00:24:37.779 [2024-11-20 17:07:29.770556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.779 [2024-11-20 17:07:29.770566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471580, cid 3, qid 0 00:24:37.779 [2024-11-20 17:07:29.770788] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.779 [2024-11-20 17:07:29.770794] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.779 [2024-11-20 17:07:29.770797] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.770801] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471580) on tqpair=0x140f690 00:24:37.779 [2024-11-20 17:07:29.770811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.770815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.770819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x140f690) 00:24:37.779 [2024-11-20 17:07:29.770825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.779 [2024-11-20 17:07:29.770835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471580, cid 3, qid 0 00:24:37.779 [2024-11-20 17:07:29.771041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.779 [2024-11-20 17:07:29.771047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
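The repeated FABRIC PROPERTY GET qid:0 cid:3 entries above are the host side of the controller shutdown handshake: nvme_ctrlr_shutdown_set_cc_done has written CC.SHN (and since the log reported RTD3E = 0, a 10000 ms fallback shutdown timeout applies), and nvme_ctrlr_shutdown_poll_async is now re-reading CSTS over the admin queue until the shutdown-complete status appears. A minimal register-level sketch of that handshake, assuming hypothetical read_reg()/write_reg()/sleep_ms() helpers that stand in for the fabrics Property Get/Set capsules seen in the trace:

    #include <stdbool.h>
    #include <stdint.h>

    #define NVME_REG_CC         0x14u          /* Controller Configuration */
    #define NVME_REG_CSTS       0x1cu          /* Controller Status */
    #define NVME_CC_SHN_NORMAL  (0x1u << 14)   /* CC.SHN = 01b: normal shutdown */
    #define NVME_CSTS_SHST_MASK 0xcu           /* CSTS.SHST, bits 3:2 */
    #define NVME_CSTS_SHST_DONE 0x8u           /* 10b: shutdown processing complete */

    /* Hypothetical helpers: over fabrics these map to Property Get/Set commands. */
    extern uint32_t read_reg(uint32_t offset);
    extern void write_reg(uint32_t offset, uint32_t value);
    extern void sleep_ms(unsigned int ms);

    static bool shutdown_controller(unsigned int timeout_ms)
    {
        /* Request a normal shutdown by setting CC.SHN. */
        write_reg(NVME_REG_CC, read_reg(NVME_REG_CC) | NVME_CC_SHN_NORMAL);

        /* Poll CSTS until SHST reads 10b; each poll corresponds to one of
         * the FABRIC PROPERTY GET cid:3 round trips in the log above. */
        for (unsigned int waited = 0; waited < timeout_ms; waited++) {
            if ((read_reg(NVME_REG_CSTS) & NVME_CSTS_SHST_MASK) ==
                NVME_CSTS_SHST_DONE)
                return true;
            sleep_ms(1);
        }
        return false;
    }

The poll exits as soon as CSTS.SHST reads 10b, which is why the trace that follows can report the shutdown as complete after only 7 milliseconds.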
00:24:37.779 [2024-11-20 17:07:29.771051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.779 [2024-11-20 17:07:29.771055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471580) on tqpair=0x140f690 00:24:37.779 [2024-11-20 17:07:29.771064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.771068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.771072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x140f690) 00:24:37.780 [2024-11-20 17:07:29.771079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.780 [2024-11-20 17:07:29.771089] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1471580, cid 3, qid 0 00:24:37.780 [2024-11-20 17:07:29.775169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.780 [2024-11-20 17:07:29.775177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.780 [2024-11-20 17:07:29.775181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.775185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1471580) on tqpair=0x140f690 00:24:37.780 [2024-11-20 17:07:29.775196] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:24:37.780 00:24:37.780 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:37.780 [2024-11-20 17:07:29.821569] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
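The spdk_nvme_identify invocation above passes the target as a transport ID string via -r. In application code the same string is handed to SPDK's public host API; a minimal sketch, assuming the SPDK headers are available (error handling trimmed, app name illustrative):

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        spdk_env_opts_init(&opts);
        opts.name = "identify_sketch";   /* hypothetical application name */
        if (spdk_env_init(&opts) < 0)
            return 1;

        /* Same transport ID string the test passes to spdk_nvme_identify -r. */
        struct spdk_nvme_transport_id trid = {};
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0)
            return 1;

        /* spdk_nvme_connect() drives the init state machine traced below:
         * connect adminq -> read vs/cap -> set CC.EN -> wait CSTS.RDY ->
         * identify -> configure AER -> ... -> ready. */
        struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL)
            return 1;

        spdk_nvme_detach(ctrlr);
        return 0;
    }

In the nvme_tcp traces that follow, pdu type = 1, 5 and 7 correspond to the NVMe/TCP ICResp, CapsuleResp and C2HData PDUs respectively.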
00:24:37.780 [2024-11-20 17:07:29.821619] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2058680 ] 00:24:37.780 [2024-11-20 17:07:29.878667] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:37.780 [2024-11-20 17:07:29.878738] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:37.780 [2024-11-20 17:07:29.878744] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:37.780 [2024-11-20 17:07:29.878763] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:37.780 [2024-11-20 17:07:29.878775] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:37.780 [2024-11-20 17:07:29.879484] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:37.780 [2024-11-20 17:07:29.879523] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15c2690 0 00:24:37.780 [2024-11-20 17:07:29.890180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:37.780 [2024-11-20 17:07:29.890195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:37.780 [2024-11-20 17:07:29.890199] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:37.780 [2024-11-20 17:07:29.890203] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:37.780 [2024-11-20 17:07:29.890239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.890245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.890249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c2690) 00:24:37.780 [2024-11-20 17:07:29.890262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:37.780 [2024-11-20 17:07:29.890285] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624100, cid 0, qid 0 00:24:37.780 [2024-11-20 17:07:29.898175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.780 [2024-11-20 17:07:29.898185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.780 [2024-11-20 17:07:29.898189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.898194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624100) on tqpair=0x15c2690 00:24:37.780 [2024-11-20 17:07:29.898204] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:37.780 [2024-11-20 17:07:29.898211] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:37.780 [2024-11-20 17:07:29.898217] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:37.780 [2024-11-20 17:07:29.898231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.898236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.898239] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c2690) 00:24:37.780 [2024-11-20 17:07:29.898253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.780 [2024-11-20 17:07:29.898270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624100, cid 0, qid 0 00:24:37.780 [2024-11-20 17:07:29.898481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.780 [2024-11-20 17:07:29.898487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.780 [2024-11-20 17:07:29.898491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.898495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624100) on tqpair=0x15c2690 00:24:37.780 [2024-11-20 17:07:29.898500] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:37.780 [2024-11-20 17:07:29.898508] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:37.780 [2024-11-20 17:07:29.898515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.898519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.898523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c2690) 00:24:37.780 [2024-11-20 17:07:29.898530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.780 [2024-11-20 17:07:29.898541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624100, cid 0, qid 0 00:24:37.780 [2024-11-20 17:07:29.898759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.780 [2024-11-20 17:07:29.898765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.780 [2024-11-20 17:07:29.898769] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.898773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624100) on tqpair=0x15c2690 00:24:37.780 [2024-11-20 17:07:29.898778] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:37.780 [2024-11-20 17:07:29.898787] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:37.780 [2024-11-20 17:07:29.898793] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.898797] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.898801] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c2690) 00:24:37.780 [2024-11-20 17:07:29.898808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.780 [2024-11-20 17:07:29.898818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624100, cid 0, qid 0 00:24:37.780 [2024-11-20 17:07:29.899022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.780 [2024-11-20 17:07:29.899029] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.780 [2024-11-20 
17:07:29.899032] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.899036] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624100) on tqpair=0x15c2690 00:24:37.780 [2024-11-20 17:07:29.899041] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:37.780 [2024-11-20 17:07:29.899051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.899055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.899059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c2690) 00:24:37.780 [2024-11-20 17:07:29.899065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.780 [2024-11-20 17:07:29.899076] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624100, cid 0, qid 0 00:24:37.780 [2024-11-20 17:07:29.899263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.780 [2024-11-20 17:07:29.899270] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.780 [2024-11-20 17:07:29.899274] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.899278] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624100) on tqpair=0x15c2690 00:24:37.780 [2024-11-20 17:07:29.899283] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:37.780 [2024-11-20 17:07:29.899288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:37.780 [2024-11-20 17:07:29.899296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:37.780 [2024-11-20 17:07:29.899405] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:37.780 [2024-11-20 17:07:29.899410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:37.780 [2024-11-20 17:07:29.899418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.899422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.899425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c2690) 00:24:37.780 [2024-11-20 17:07:29.899432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.780 [2024-11-20 17:07:29.899443] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624100, cid 0, qid 0 00:24:37.780 [2024-11-20 17:07:29.899638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.780 [2024-11-20 17:07:29.899644] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.780 [2024-11-20 17:07:29.899648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.899652] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624100) on tqpair=0x15c2690 00:24:37.780 
[2024-11-20 17:07:29.899656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:37.780 [2024-11-20 17:07:29.899666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.899670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.780 [2024-11-20 17:07:29.899673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c2690) 00:24:37.780 [2024-11-20 17:07:29.899680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.780 [2024-11-20 17:07:29.899690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624100, cid 0, qid 0 00:24:37.780 [2024-11-20 17:07:29.899860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.781 [2024-11-20 17:07:29.899867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.781 [2024-11-20 17:07:29.899870] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.899874] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624100) on tqpair=0x15c2690 00:24:37.781 [2024-11-20 17:07:29.899879] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:37.781 [2024-11-20 17:07:29.899883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:37.781 [2024-11-20 17:07:29.899891] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:37.781 [2024-11-20 17:07:29.899905] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:37.781 [2024-11-20 17:07:29.899917] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.899920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c2690) 00:24:37.781 [2024-11-20 17:07:29.899928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.781 [2024-11-20 17:07:29.899938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624100, cid 0, qid 0 00:24:37.781 [2024-11-20 17:07:29.900213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:37.781 [2024-11-20 17:07:29.900221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:37.781 [2024-11-20 17:07:29.900225] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.900229] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15c2690): datao=0, datal=4096, cccid=0 00:24:37.781 [2024-11-20 17:07:29.900233] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1624100) on tqpair(0x15c2690): expected_datao=0, payload_size=4096 00:24:37.781 [2024-11-20 17:07:29.900238] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.900262] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.900266] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.941343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.781 [2024-11-20 17:07:29.941356] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.781 [2024-11-20 17:07:29.941361] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.941366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624100) on tqpair=0x15c2690 00:24:37.781 [2024-11-20 17:07:29.941376] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:37.781 [2024-11-20 17:07:29.941382] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:37.781 [2024-11-20 17:07:29.941387] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:37.781 [2024-11-20 17:07:29.941401] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:37.781 [2024-11-20 17:07:29.941406] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:37.781 [2024-11-20 17:07:29.941413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:37.781 [2024-11-20 17:07:29.941425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:37.781 [2024-11-20 17:07:29.941433] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.941438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.941442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c2690) 00:24:37.781 [2024-11-20 17:07:29.941451] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:37.781 [2024-11-20 17:07:29.941465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624100, cid 0, qid 0 00:24:37.781 [2024-11-20 17:07:29.941609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.781 [2024-11-20 17:07:29.941616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.781 [2024-11-20 17:07:29.941619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.941623] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624100) on tqpair=0x15c2690 00:24:37.781 [2024-11-20 17:07:29.941631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.941636] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.941639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c2690) 00:24:37.781 [2024-11-20 17:07:29.941650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.781 [2024-11-20 17:07:29.941657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.941662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.781 [2024-11-20 
17:07:29.941666] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15c2690) 00:24:37.781 [2024-11-20 17:07:29.941672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.781 [2024-11-20 17:07:29.941678] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.941682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.941686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15c2690) 00:24:37.781 [2024-11-20 17:07:29.941693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.781 [2024-11-20 17:07:29.941700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.941704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.941708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c2690) 00:24:37.781 [2024-11-20 17:07:29.941715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.781 [2024-11-20 17:07:29.941720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:37.781 [2024-11-20 17:07:29.941729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:37.781 [2024-11-20 17:07:29.941736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.941740] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15c2690) 00:24:37.781 [2024-11-20 17:07:29.941748] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.781 [2024-11-20 17:07:29.941760] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624100, cid 0, qid 0 00:24:37.781 [2024-11-20 17:07:29.941766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624280, cid 1, qid 0 00:24:37.781 [2024-11-20 17:07:29.941772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624400, cid 2, qid 0 00:24:37.781 [2024-11-20 17:07:29.941777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624580, cid 3, qid 0 00:24:37.781 [2024-11-20 17:07:29.941783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624700, cid 4, qid 0 00:24:37.781 [2024-11-20 17:07:29.942015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.781 [2024-11-20 17:07:29.942022] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.781 [2024-11-20 17:07:29.942026] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.942030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624700) on tqpair=0x15c2690 00:24:37.781 [2024-11-20 17:07:29.942038] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:37.781 [2024-11-20 17:07:29.942044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:37.781 [2024-11-20 17:07:29.942054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:37.781 [2024-11-20 17:07:29.942062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:37.781 [2024-11-20 17:07:29.942071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.942075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.942079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15c2690) 00:24:37.781 [2024-11-20 17:07:29.942087] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:37.781 [2024-11-20 17:07:29.942098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624700, cid 4, qid 0 00:24:37.781 [2024-11-20 17:07:29.946169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.781 [2024-11-20 17:07:29.946178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.781 [2024-11-20 17:07:29.946182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.946186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624700) on tqpair=0x15c2690 00:24:37.781 [2024-11-20 17:07:29.946257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:37.781 [2024-11-20 17:07:29.946269] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:37.781 [2024-11-20 17:07:29.946278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.946282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15c2690) 00:24:37.781 [2024-11-20 17:07:29.946289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.781 [2024-11-20 17:07:29.946303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624700, cid 4, qid 0 00:24:37.781 [2024-11-20 17:07:29.946502] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:37.781 [2024-11-20 17:07:29.946509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:37.781 [2024-11-20 17:07:29.946513] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.946518] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15c2690): datao=0, datal=4096, cccid=4 00:24:37.781 [2024-11-20 17:07:29.946524] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1624700) on tqpair(0x15c2690): expected_datao=0, payload_size=4096 00:24:37.781 [2024-11-20 17:07:29.946529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.781 [2024-11-20 17:07:29.946537] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:37.782 [2024-11-20 17:07:29.946542] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:37.782 [2024-11-20 
17:07:29.946707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.782 [2024-11-20 17:07:29.946713] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.782 [2024-11-20 17:07:29.946717] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.782 [2024-11-20 17:07:29.946721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624700) on tqpair=0x15c2690 00:24:37.782 [2024-11-20 17:07:29.946734] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:37.782 [2024-11-20 17:07:29.946745] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:37.782 [2024-11-20 17:07:29.946756] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:37.782 [2024-11-20 17:07:29.946764] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.782 [2024-11-20 17:07:29.946768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15c2690) 00:24:37.782 [2024-11-20 17:07:29.946775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.782 [2024-11-20 17:07:29.946787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624700, cid 4, qid 0 00:24:37.782 [2024-11-20 17:07:29.947010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:37.782 [2024-11-20 17:07:29.947017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:37.782 [2024-11-20 17:07:29.947021] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:37.782 [2024-11-20 17:07:29.947025] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15c2690): datao=0, datal=4096, cccid=4 00:24:37.782 [2024-11-20 17:07:29.947029] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1624700) on tqpair(0x15c2690): expected_datao=0, payload_size=4096 00:24:37.782 [2024-11-20 17:07:29.947033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.782 [2024-11-20 17:07:29.947040] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:37.782 [2024-11-20 17:07:29.947044] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:37.782 [2024-11-20 17:07:29.947202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.782 [2024-11-20 17:07:29.947209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.782 [2024-11-20 17:07:29.947212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.782 [2024-11-20 17:07:29.947216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624700) on tqpair=0x15c2690 00:24:37.782 [2024-11-20 17:07:29.947231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:37.782 [2024-11-20 17:07:29.947241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:37.782 [2024-11-20 17:07:29.947249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:37.782 [2024-11-20 17:07:29.947252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x15c2690) 00:24:37.782 [2024-11-20 17:07:29.947259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.782 [2024-11-20 17:07:29.947270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624700, cid 4, qid 0 00:24:37.782 [2024-11-20 17:07:29.947512] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:37.782 [2024-11-20 17:07:29.947519] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:37.782 [2024-11-20 17:07:29.947523] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:37.782 [2024-11-20 17:07:29.947526] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15c2690): datao=0, datal=4096, cccid=4 00:24:37.782 [2024-11-20 17:07:29.947531] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1624700) on tqpair(0x15c2690): expected_datao=0, payload_size=4096 00:24:37.782 [2024-11-20 17:07:29.947535] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:37.782 [2024-11-20 17:07:29.947542] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:37.782 [2024-11-20 17:07:29.947546] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:37.782 [2024-11-20 17:07:29.947705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:37.782 [2024-11-20 17:07:29.947711] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:37.782 [2024-11-20 17:07:29.947714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:37.782 [2024-11-20 17:07:29.947718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624700) on tqpair=0x15c2690 00:24:37.782 [2024-11-20 17:07:29.947726] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:37.782 [2024-11-20 17:07:29.947735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:37.782 [2024-11-20 17:07:29.947744] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:37.782 [2024-11-20 17:07:29.947751] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:37.782 [2024-11-20 17:07:29.947759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:37.782 [2024-11-20 17:07:29.947764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:37.782 [2024-11-20 17:07:29.947770] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:37.782 [2024-11-20 17:07:29.947775] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:37.782 [2024-11-20 17:07:29.947780] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:38.046 [2024-11-20 17:07:29.947799] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.046 
[2024-11-20 17:07:29.947806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15c2690) 00:24:38.046 [2024-11-20 17:07:29.947816] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.046 [2024-11-20 17:07:29.947827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.046 [2024-11-20 17:07:29.947834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.046 [2024-11-20 17:07:29.947838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15c2690) 00:24:38.046 [2024-11-20 17:07:29.947844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.046 [2024-11-20 17:07:29.947858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624700, cid 4, qid 0 00:24:38.046 [2024-11-20 17:07:29.947863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624880, cid 5, qid 0 00:24:38.046 [2024-11-20 17:07:29.948097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.046 [2024-11-20 17:07:29.948107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.046 [2024-11-20 17:07:29.948111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.046 [2024-11-20 17:07:29.948115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624700) on tqpair=0x15c2690 00:24:38.046 [2024-11-20 17:07:29.948122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.046 [2024-11-20 17:07:29.948128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.046 [2024-11-20 17:07:29.948132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.046 [2024-11-20 17:07:29.948136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624880) on tqpair=0x15c2690 00:24:38.046 [2024-11-20 17:07:29.948145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.046 [2024-11-20 17:07:29.948149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15c2690) 00:24:38.046 [2024-11-20 17:07:29.948155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.046 [2024-11-20 17:07:29.948175] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624880, cid 5, qid 0 00:24:38.046 [2024-11-20 17:07:29.948347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.046 [2024-11-20 17:07:29.948353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.046 [2024-11-20 17:07:29.948357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.046 [2024-11-20 17:07:29.948361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624880) on tqpair=0x15c2690 00:24:38.046 [2024-11-20 17:07:29.948370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.046 [2024-11-20 17:07:29.948374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15c2690) 00:24:38.047 [2024-11-20 17:07:29.948381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.047 [2024-11-20 17:07:29.948394] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624880, cid 5, qid 0 00:24:38.047 [2024-11-20 17:07:29.948595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.047 [2024-11-20 17:07:29.948604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.047 [2024-11-20 17:07:29.948608] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.948612] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624880) on tqpair=0x15c2690 00:24:38.047 [2024-11-20 17:07:29.948622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.948626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15c2690) 00:24:38.047 [2024-11-20 17:07:29.948632] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.047 [2024-11-20 17:07:29.948642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624880, cid 5, qid 0 00:24:38.047 [2024-11-20 17:07:29.948831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.047 [2024-11-20 17:07:29.948837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.047 [2024-11-20 17:07:29.948841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.948844] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624880) on tqpair=0x15c2690 00:24:38.047 [2024-11-20 17:07:29.948860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.948865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15c2690) 00:24:38.047 [2024-11-20 17:07:29.948871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.047 [2024-11-20 17:07:29.948879] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.948882] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15c2690) 00:24:38.047 [2024-11-20 17:07:29.948889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.047 [2024-11-20 17:07:29.948896] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.948900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x15c2690) 00:24:38.047 [2024-11-20 17:07:29.948906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.047 [2024-11-20 17:07:29.948914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.948918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x15c2690) 00:24:38.047 [2024-11-20 17:07:29.948924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.047 [2024-11-20 17:07:29.948936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624880, cid 5, qid 0 00:24:38.047 
[2024-11-20 17:07:29.948941] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624700, cid 4, qid 0 00:24:38.047 [2024-11-20 17:07:29.948946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624a00, cid 6, qid 0 00:24:38.047 [2024-11-20 17:07:29.948951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624b80, cid 7, qid 0 00:24:38.047 [2024-11-20 17:07:29.949230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.047 [2024-11-20 17:07:29.949238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.047 [2024-11-20 17:07:29.949242] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949245] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15c2690): datao=0, datal=8192, cccid=5 00:24:38.047 [2024-11-20 17:07:29.949253] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1624880) on tqpair(0x15c2690): expected_datao=0, payload_size=8192 00:24:38.047 [2024-11-20 17:07:29.949257] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949355] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949359] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.047 [2024-11-20 17:07:29.949371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.047 [2024-11-20 17:07:29.949374] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949378] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15c2690): datao=0, datal=512, cccid=4 00:24:38.047 [2024-11-20 17:07:29.949382] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1624700) on tqpair(0x15c2690): expected_datao=0, payload_size=512 00:24:38.047 [2024-11-20 17:07:29.949387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949393] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949397] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.047 [2024-11-20 17:07:29.949408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.047 [2024-11-20 17:07:29.949412] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949416] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15c2690): datao=0, datal=512, cccid=6 00:24:38.047 [2024-11-20 17:07:29.949420] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1624a00) on tqpair(0x15c2690): expected_datao=0, payload_size=512 00:24:38.047 [2024-11-20 17:07:29.949424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949431] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949434] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.047 [2024-11-20 17:07:29.949446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.047 [2024-11-20 17:07:29.949449] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949453] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15c2690): datao=0, datal=4096, cccid=7 00:24:38.047 [2024-11-20 17:07:29.949457] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1624b80) on tqpair(0x15c2690): expected_datao=0, payload_size=4096 00:24:38.047 [2024-11-20 17:07:29.949461] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949469] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949472] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.047 [2024-11-20 17:07:29.949488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.047 [2024-11-20 17:07:29.949492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624880) on tqpair=0x15c2690 00:24:38.047 [2024-11-20 17:07:29.949512] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.047 [2024-11-20 17:07:29.949518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.047 [2024-11-20 17:07:29.949522] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949526] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624700) on tqpair=0x15c2690 00:24:38.047 [2024-11-20 17:07:29.949536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.047 [2024-11-20 17:07:29.949542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.047 [2024-11-20 17:07:29.949546] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624a00) on tqpair=0x15c2690 00:24:38.047 [2024-11-20 17:07:29.949559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.047 [2024-11-20 17:07:29.949565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.047 [2024-11-20 17:07:29.949568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.047 [2024-11-20 17:07:29.949572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624b80) on tqpair=0x15c2690 00:24:38.047 ===================================================== 00:24:38.047 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:38.047 ===================================================== 00:24:38.047 Controller Capabilities/Features 00:24:38.047 ================================ 00:24:38.047 Vendor ID: 8086 00:24:38.048 Subsystem Vendor ID: 8086 00:24:38.048 Serial Number: SPDK00000000000001 00:24:38.048 Model Number: SPDK bdev Controller 00:24:38.048 Firmware Version: 25.01 00:24:38.048 Recommended Arb Burst: 6 00:24:38.048 IEEE OUI Identifier: e4 d2 5c 00:24:38.048 Multi-path I/O 00:24:38.048 May have multiple subsystem ports: Yes 00:24:38.048 May have multiple controllers: Yes 00:24:38.048 Associated with SR-IOV VF: No 00:24:38.048 Max Data Transfer Size: 131072 00:24:38.048 Max Number of Namespaces: 32 00:24:38.048 Max Number of I/O Queues: 127 00:24:38.048 NVMe Specification Version (VS): 1.3 00:24:38.048 NVMe Specification Version (Identify): 1.3 
00:24:38.048 Maximum Queue Entries: 128 00:24:38.048 Contiguous Queues Required: Yes 00:24:38.048 Arbitration Mechanisms Supported 00:24:38.048 Weighted Round Robin: Not Supported 00:24:38.048 Vendor Specific: Not Supported 00:24:38.048 Reset Timeout: 15000 ms 00:24:38.048 Doorbell Stride: 4 bytes 00:24:38.048 NVM Subsystem Reset: Not Supported 00:24:38.048 Command Sets Supported 00:24:38.048 NVM Command Set: Supported 00:24:38.048 Boot Partition: Not Supported 00:24:38.048 Memory Page Size Minimum: 4096 bytes 00:24:38.048 Memory Page Size Maximum: 4096 bytes 00:24:38.048 Persistent Memory Region: Not Supported 00:24:38.048 Optional Asynchronous Events Supported 00:24:38.048 Namespace Attribute Notices: Supported 00:24:38.048 Firmware Activation Notices: Not Supported 00:24:38.048 ANA Change Notices: Not Supported 00:24:38.048 PLE Aggregate Log Change Notices: Not Supported 00:24:38.048 LBA Status Info Alert Notices: Not Supported 00:24:38.048 EGE Aggregate Log Change Notices: Not Supported 00:24:38.048 Normal NVM Subsystem Shutdown event: Not Supported 00:24:38.048 Zone Descriptor Change Notices: Not Supported 00:24:38.048 Discovery Log Change Notices: Not Supported 00:24:38.048 Controller Attributes 00:24:38.048 128-bit Host Identifier: Supported 00:24:38.048 Non-Operational Permissive Mode: Not Supported 00:24:38.048 NVM Sets: Not Supported 00:24:38.048 Read Recovery Levels: Not Supported 00:24:38.048 Endurance Groups: Not Supported 00:24:38.048 Predictable Latency Mode: Not Supported 00:24:38.048 Traffic Based Keep ALive: Not Supported 00:24:38.048 Namespace Granularity: Not Supported 00:24:38.048 SQ Associations: Not Supported 00:24:38.048 UUID List: Not Supported 00:24:38.048 Multi-Domain Subsystem: Not Supported 00:24:38.048 Fixed Capacity Management: Not Supported 00:24:38.048 Variable Capacity Management: Not Supported 00:24:38.048 Delete Endurance Group: Not Supported 00:24:38.048 Delete NVM Set: Not Supported 00:24:38.048 Extended LBA Formats Supported: Not Supported 00:24:38.048 Flexible Data Placement Supported: Not Supported 00:24:38.048 00:24:38.048 Controller Memory Buffer Support 00:24:38.048 ================================ 00:24:38.048 Supported: No 00:24:38.048 00:24:38.048 Persistent Memory Region Support 00:24:38.048 ================================ 00:24:38.048 Supported: No 00:24:38.048 00:24:38.048 Admin Command Set Attributes 00:24:38.048 ============================ 00:24:38.048 Security Send/Receive: Not Supported 00:24:38.048 Format NVM: Not Supported 00:24:38.048 Firmware Activate/Download: Not Supported 00:24:38.048 Namespace Management: Not Supported 00:24:38.048 Device Self-Test: Not Supported 00:24:38.048 Directives: Not Supported 00:24:38.048 NVMe-MI: Not Supported 00:24:38.048 Virtualization Management: Not Supported 00:24:38.048 Doorbell Buffer Config: Not Supported 00:24:38.048 Get LBA Status Capability: Not Supported 00:24:38.048 Command & Feature Lockdown Capability: Not Supported 00:24:38.048 Abort Command Limit: 4 00:24:38.048 Async Event Request Limit: 4 00:24:38.048 Number of Firmware Slots: N/A 00:24:38.048 Firmware Slot 1 Read-Only: N/A 00:24:38.048 Firmware Activation Without Reset: N/A 00:24:38.048 Multiple Update Detection Support: N/A 00:24:38.048 Firmware Update Granularity: No Information Provided 00:24:38.048 Per-Namespace SMART Log: No 00:24:38.048 Asymmetric Namespace Access Log Page: Not Supported 00:24:38.048 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:38.048 Command Effects Log Page: Supported 00:24:38.048 Get Log Page Extended 
Data: Supported 00:24:38.048 Telemetry Log Pages: Not Supported 00:24:38.048 Persistent Event Log Pages: Not Supported 00:24:38.048 Supported Log Pages Log Page: May Support 00:24:38.048 Commands Supported & Effects Log Page: Not Supported 00:24:38.048 Feature Identifiers & Effects Log Page:May Support 00:24:38.048 NVMe-MI Commands & Effects Log Page: May Support 00:24:38.048 Data Area 4 for Telemetry Log: Not Supported 00:24:38.048 Error Log Page Entries Supported: 128 00:24:38.048 Keep Alive: Supported 00:24:38.048 Keep Alive Granularity: 10000 ms 00:24:38.048 00:24:38.048 NVM Command Set Attributes 00:24:38.048 ========================== 00:24:38.048 Submission Queue Entry Size 00:24:38.048 Max: 64 00:24:38.048 Min: 64 00:24:38.048 Completion Queue Entry Size 00:24:38.048 Max: 16 00:24:38.048 Min: 16 00:24:38.048 Number of Namespaces: 32 00:24:38.048 Compare Command: Supported 00:24:38.048 Write Uncorrectable Command: Not Supported 00:24:38.048 Dataset Management Command: Supported 00:24:38.048 Write Zeroes Command: Supported 00:24:38.048 Set Features Save Field: Not Supported 00:24:38.048 Reservations: Supported 00:24:38.048 Timestamp: Not Supported 00:24:38.048 Copy: Supported 00:24:38.048 Volatile Write Cache: Present 00:24:38.048 Atomic Write Unit (Normal): 1 00:24:38.048 Atomic Write Unit (PFail): 1 00:24:38.048 Atomic Compare & Write Unit: 1 00:24:38.048 Fused Compare & Write: Supported 00:24:38.048 Scatter-Gather List 00:24:38.048 SGL Command Set: Supported 00:24:38.048 SGL Keyed: Supported 00:24:38.048 SGL Bit Bucket Descriptor: Not Supported 00:24:38.048 SGL Metadata Pointer: Not Supported 00:24:38.048 Oversized SGL: Not Supported 00:24:38.048 SGL Metadata Address: Not Supported 00:24:38.048 SGL Offset: Supported 00:24:38.048 Transport SGL Data Block: Not Supported 00:24:38.048 Replay Protected Memory Block: Not Supported 00:24:38.048 00:24:38.048 Firmware Slot Information 00:24:38.048 ========================= 00:24:38.048 Active slot: 1 00:24:38.048 Slot 1 Firmware Revision: 25.01 00:24:38.048 00:24:38.048 00:24:38.048 Commands Supported and Effects 00:24:38.048 ============================== 00:24:38.048 Admin Commands 00:24:38.049 -------------- 00:24:38.049 Get Log Page (02h): Supported 00:24:38.049 Identify (06h): Supported 00:24:38.049 Abort (08h): Supported 00:24:38.049 Set Features (09h): Supported 00:24:38.049 Get Features (0Ah): Supported 00:24:38.049 Asynchronous Event Request (0Ch): Supported 00:24:38.049 Keep Alive (18h): Supported 00:24:38.049 I/O Commands 00:24:38.049 ------------ 00:24:38.049 Flush (00h): Supported LBA-Change 00:24:38.049 Write (01h): Supported LBA-Change 00:24:38.049 Read (02h): Supported 00:24:38.049 Compare (05h): Supported 00:24:38.049 Write Zeroes (08h): Supported LBA-Change 00:24:38.049 Dataset Management (09h): Supported LBA-Change 00:24:38.049 Copy (19h): Supported LBA-Change 00:24:38.049 00:24:38.049 Error Log 00:24:38.049 ========= 00:24:38.049 00:24:38.049 Arbitration 00:24:38.049 =========== 00:24:38.049 Arbitration Burst: 1 00:24:38.049 00:24:38.049 Power Management 00:24:38.049 ================ 00:24:38.049 Number of Power States: 1 00:24:38.049 Current Power State: Power State #0 00:24:38.049 Power State #0: 00:24:38.049 Max Power: 0.00 W 00:24:38.049 Non-Operational State: Operational 00:24:38.049 Entry Latency: Not Reported 00:24:38.049 Exit Latency: Not Reported 00:24:38.049 Relative Read Throughput: 0 00:24:38.049 Relative Read Latency: 0 00:24:38.049 Relative Write Throughput: 0 00:24:38.049 Relative Write Latency: 0 
00:24:38.049 Idle Power: Not Reported 00:24:38.049 Active Power: Not Reported 00:24:38.049 Non-Operational Permissive Mode: Not Supported 00:24:38.049 00:24:38.049 Health Information 00:24:38.049 ================== 00:24:38.049 Critical Warnings: 00:24:38.049 Available Spare Space: OK 00:24:38.049 Temperature: OK 00:24:38.049 Device Reliability: OK 00:24:38.049 Read Only: No 00:24:38.049 Volatile Memory Backup: OK 00:24:38.049 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:38.049 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:38.049 Available Spare: 0% 00:24:38.049 Available Spare Threshold: 0% 00:24:38.049 Life Percentage Used:[2024-11-20 17:07:29.949675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.049 [2024-11-20 17:07:29.949681] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x15c2690) 00:24:38.049 [2024-11-20 17:07:29.949688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.049 [2024-11-20 17:07:29.949700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624b80, cid 7, qid 0 00:24:38.049 [2024-11-20 17:07:29.949893] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.049 [2024-11-20 17:07:29.949899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.049 [2024-11-20 17:07:29.949903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.049 [2024-11-20 17:07:29.949907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624b80) on tqpair=0x15c2690 00:24:38.049 [2024-11-20 17:07:29.949942] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:38.049 [2024-11-20 17:07:29.949952] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624100) on tqpair=0x15c2690 00:24:38.049 [2024-11-20 17:07:29.949959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.049 [2024-11-20 17:07:29.949964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624280) on tqpair=0x15c2690 00:24:38.049 [2024-11-20 17:07:29.949969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.049 [2024-11-20 17:07:29.949974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624400) on tqpair=0x15c2690 00:24:38.049 [2024-11-20 17:07:29.949978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.049 [2024-11-20 17:07:29.949983] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624580) on tqpair=0x15c2690 00:24:38.049 [2024-11-20 17:07:29.949988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.049 [2024-11-20 17:07:29.949996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.049 [2024-11-20 17:07:29.950000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.049 [2024-11-20 17:07:29.950004] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c2690) 00:24:38.049 [2024-11-20 17:07:29.950011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:38.049 [2024-11-20 17:07:29.950022] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624580, cid 3, qid 0 00:24:38.049 [2024-11-20 17:07:29.954177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.049 [2024-11-20 17:07:29.954187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.049 [2024-11-20 17:07:29.954191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.049 [2024-11-20 17:07:29.954195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624580) on tqpair=0x15c2690 00:24:38.049 [2024-11-20 17:07:29.954202] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.049 [2024-11-20 17:07:29.954206] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.049 [2024-11-20 17:07:29.954210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c2690) 00:24:38.049 [2024-11-20 17:07:29.954220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.049 [2024-11-20 17:07:29.954237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624580, cid 3, qid 0 00:24:38.049 [2024-11-20 17:07:29.954459] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.049 [2024-11-20 17:07:29.954465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.049 [2024-11-20 17:07:29.954469] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.049 [2024-11-20 17:07:29.954473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624580) on tqpair=0x15c2690 00:24:38.049 [2024-11-20 17:07:29.954478] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:38.049 [2024-11-20 17:07:29.954482] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:38.049 [2024-11-20 17:07:29.954492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.049 [2024-11-20 17:07:29.954496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.049 [2024-11-20 17:07:29.954499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c2690) 00:24:38.049 [2024-11-20 17:07:29.954506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.049 [2024-11-20 17:07:29.954517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624580, cid 3, qid 0 00:24:38.049 [2024-11-20 17:07:29.954722] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.049 [2024-11-20 17:07:29.954729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.049 [2024-11-20 17:07:29.954732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.049 [2024-11-20 17:07:29.954736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624580) on tqpair=0x15c2690 00:24:38.049 [2024-11-20 17:07:29.954746] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.049 [2024-11-20 17:07:29.954750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.049 [2024-11-20 17:07:29.954754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c2690) 00:24:38.049 [2024-11-20 17:07:29.954761] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.049 [2024-11-20 17:07:29.954771] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624580, cid 3, qid 0 00:24:38.049 [2024-11-20 17:07:29.955018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.049 [2024-11-20 17:07:29.955025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.049 [2024-11-20 17:07:29.955029] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.049 [2024-11-20 17:07:29.955033] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624580) on tqpair=0x15c2690 00:24:38.049 [2024-11-20 17:07:29.955044] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.049 [2024-11-20 17:07:29.955048] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.049 [2024-11-20 17:07:29.955051] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c2690) 00:24:38.049 [2024-11-20 17:07:29.955058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.049 [2024-11-20 17:07:29.955068] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624580, cid 3, qid 0 00:24:38.049 [2024-11-20 17:07:29.955277] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.049 [2024-11-20 17:07:29.955284] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.049 [2024-11-20 17:07:29.955287] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.049 [2024-11-20 17:07:29.955291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624580) on tqpair=0x15c2690 00:24:38.049 [2024-11-20 17:07:29.955301] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.049 [2024-11-20 17:07:29.955305] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.049 [2024-11-20 17:07:29.955313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c2690) 00:24:38.049 [2024-11-20 17:07:29.955320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.050 [2024-11-20 17:07:29.955331] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624580, cid 3, qid 0 00:24:38.050 [2024-11-20 17:07:29.955547] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.050 [2024-11-20 17:07:29.955554] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.050 [2024-11-20 17:07:29.955557] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.955561] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624580) on tqpair=0x15c2690 00:24:38.050 [2024-11-20 17:07:29.955571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.955575] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.955578] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c2690) 00:24:38.050 [2024-11-20 17:07:29.955585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.050 [2024-11-20 17:07:29.955595] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624580, cid 3, qid 0 00:24:38.050 [2024-11-20 17:07:29.955775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.050 [2024-11-20 17:07:29.955782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.050 [2024-11-20 17:07:29.955786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.955789] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624580) on tqpair=0x15c2690 00:24:38.050 [2024-11-20 17:07:29.955800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.955804] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.955807] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c2690) 00:24:38.050 [2024-11-20 17:07:29.955814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.050 [2024-11-20 17:07:29.955824] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624580, cid 3, qid 0 00:24:38.050 [2024-11-20 17:07:29.956017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.050 [2024-11-20 17:07:29.956023] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.050 [2024-11-20 17:07:29.956027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.956030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624580) on tqpair=0x15c2690 00:24:38.050 [2024-11-20 17:07:29.956041] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.956045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.956049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c2690) 00:24:38.050 [2024-11-20 17:07:29.956056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.050 [2024-11-20 17:07:29.956067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624580, cid 3, qid 0 00:24:38.050 [2024-11-20 17:07:29.956257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.050 [2024-11-20 17:07:29.956264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.050 [2024-11-20 17:07:29.956267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.956271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624580) on tqpair=0x15c2690 00:24:38.050 [2024-11-20 17:07:29.956281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.956285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.956289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c2690) 00:24:38.050 [2024-11-20 17:07:29.956298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.050 [2024-11-20 17:07:29.956309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624580, cid 3, qid 0 00:24:38.050 [2024-11-20 17:07:29.956504] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.050 [2024-11-20 
17:07:29.956511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.050 [2024-11-20 17:07:29.956515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.956519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624580) on tqpair=0x15c2690 00:24:38.050 [2024-11-20 17:07:29.956529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.956535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.956540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c2690) 00:24:38.050 [2024-11-20 17:07:29.956547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.050 [2024-11-20 17:07:29.956557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624580, cid 3, qid 0 00:24:38.050 [2024-11-20 17:07:29.956750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.050 [2024-11-20 17:07:29.956758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.050 [2024-11-20 17:07:29.956761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.956768] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624580) on tqpair=0x15c2690 00:24:38.050 [2024-11-20 17:07:29.956781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.956787] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.956793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c2690) 00:24:38.050 [2024-11-20 17:07:29.956801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.050 [2024-11-20 17:07:29.956811] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624580, cid 3, qid 0 00:24:38.050 [2024-11-20 17:07:29.956997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.050 [2024-11-20 17:07:29.957004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.050 [2024-11-20 17:07:29.957008] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.957012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624580) on tqpair=0x15c2690 00:24:38.050 [2024-11-20 17:07:29.957023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.957031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.957035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c2690) 00:24:38.050 [2024-11-20 17:07:29.957045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.050 [2024-11-20 17:07:29.957056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624580, cid 3, qid 0 00:24:38.050 [2024-11-20 17:07:29.957246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.050 [2024-11-20 17:07:29.957253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.050 [2024-11-20 17:07:29.957256] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.050 
[2024-11-20 17:07:29.957260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624580) on tqpair=0x15c2690 00:24:38.050 [2024-11-20 17:07:29.957270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.957277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.957282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c2690) 00:24:38.050 [2024-11-20 17:07:29.957292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.050 [2024-11-20 17:07:29.957307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624580, cid 3, qid 0 00:24:38.050 [2024-11-20 17:07:29.957469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.050 [2024-11-20 17:07:29.957476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.050 [2024-11-20 17:07:29.957480] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.957484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624580) on tqpair=0x15c2690 00:24:38.050 [2024-11-20 17:07:29.957495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.957498] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.957502] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c2690) 00:24:38.050 [2024-11-20 17:07:29.957509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.050 [2024-11-20 17:07:29.957519] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624580, cid 3, qid 0 00:24:38.050 [2024-11-20 17:07:29.957701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.050 [2024-11-20 17:07:29.957707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.050 [2024-11-20 17:07:29.957711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 17:07:29.957715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624580) on tqpair=0x15c2690 00:24:38.051 [2024-11-20 17:07:29.957725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.051 [2024-11-20 17:07:29.957729] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.051 [2024-11-20 17:07:29.957732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c2690) 00:24:38.051 [2024-11-20 17:07:29.957739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.051 [2024-11-20 17:07:29.957749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624580, cid 3, qid 0 00:24:38.051 [2024-11-20 17:07:29.957952] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.051 [2024-11-20 17:07:29.957958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.051 [2024-11-20 17:07:29.957962] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.051 [2024-11-20 17:07:29.957965] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624580) on tqpair=0x15c2690 00:24:38.051 [2024-11-20 17:07:29.957975] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.051 [2024-11-20 17:07:29.957979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.051 [2024-11-20 17:07:29.957983] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c2690) 00:24:38.051 [2024-11-20 17:07:29.957989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.051 [2024-11-20 17:07:29.958000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624580, cid 3, qid 0 00:24:38.051 [2024-11-20 17:07:29.962173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.051 [2024-11-20 17:07:29.962182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.051 [2024-11-20 17:07:29.962186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.051 [2024-11-20 17:07:29.962190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624580) on tqpair=0x15c2690 00:24:38.051 [2024-11-20 17:07:29.962200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.051 [2024-11-20 17:07:29.962204] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.051 [2024-11-20 17:07:29.962208] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c2690) 00:24:38.051 [2024-11-20 17:07:29.962214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.051 [2024-11-20 17:07:29.962229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1624580, cid 3, qid 0 00:24:38.051 [2024-11-20 17:07:29.962424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.051 [2024-11-20 17:07:29.962430] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.051 [2024-11-20 17:07:29.962434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.051 [2024-11-20 17:07:29.962438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1624580) on tqpair=0x15c2690 00:24:38.051 [2024-11-20 17:07:29.962446] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:24:38.051 0% 00:24:38.051 Data Units Read: 0 00:24:38.051 Data Units Written: 0 00:24:38.051 Host Read Commands: 0 00:24:38.051 Host Write Commands: 0 00:24:38.051 Controller Busy Time: 0 minutes 00:24:38.051 Power Cycles: 0 00:24:38.051 Power On Hours: 0 hours 00:24:38.051 Unsafe Shutdowns: 0 00:24:38.051 Unrecoverable Media Errors: 0 00:24:38.051 Lifetime Error Log Entries: 0 00:24:38.051 Warning Temperature Time: 0 minutes 00:24:38.051 Critical Temperature Time: 0 minutes 00:24:38.051 00:24:38.051 Number of Queues 00:24:38.051 ================ 00:24:38.051 Number of I/O Submission Queues: 127 00:24:38.051 Number of I/O Completion Queues: 127 00:24:38.051 00:24:38.051 Active Namespaces 00:24:38.051 ================= 00:24:38.051 Namespace ID:1 00:24:38.051 Error Recovery Timeout: Unlimited 00:24:38.051 Command Set Identifier: NVM (00h) 00:24:38.051 Deallocate: Supported 00:24:38.051 Deallocated/Unwritten Error: Not Supported 00:24:38.051 Deallocated Read Value: Unknown 00:24:38.051 Deallocate in Write Zeroes: Not Supported 00:24:38.051 Deallocated Guard Field: 0xFFFF 00:24:38.051 Flush: Supported 00:24:38.051 Reservation: Supported 00:24:38.051 Namespace Sharing Capabilities: Multiple 
Controllers 00:24:38.051 Size (in LBAs): 131072 (0GiB) 00:24:38.051 Capacity (in LBAs): 131072 (0GiB) 00:24:38.051 Utilization (in LBAs): 131072 (0GiB) 00:24:38.051 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:38.051 EUI64: ABCDEF0123456789 00:24:38.051 UUID: 10dfe82b-7c9d-4ba4-ad8d-2030ee65e08a 00:24:38.051 Thin Provisioning: Not Supported 00:24:38.051 Per-NS Atomic Units: Yes 00:24:38.051 Atomic Boundary Size (Normal): 0 00:24:38.051 Atomic Boundary Size (PFail): 0 00:24:38.051 Atomic Boundary Offset: 0 00:24:38.051 Maximum Single Source Range Length: 65535 00:24:38.051 Maximum Copy Length: 65535 00:24:38.051 Maximum Source Range Count: 1 00:24:38.051 NGUID/EUI64 Never Reused: No 00:24:38.051 Namespace Write Protected: No 00:24:38.051 Number of LBA Formats: 1 00:24:38.051 Current LBA Format: LBA Format #00 00:24:38.051 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:38.051 00:24:38.051 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:38.051 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:38.051 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.051 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:38.051 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.051 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:38.051 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:38.051 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:38.051 17:07:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:38.051 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:38.051 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:38.051 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:38.051 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:38.051 rmmod nvme_tcp 00:24:38.051 rmmod nvme_fabrics 00:24:38.051 rmmod nvme_keyring 00:24:38.051 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:38.051 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:38.051 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:38.051 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2058320 ']' 00:24:38.051 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2058320 00:24:38.051 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2058320 ']' 00:24:38.051 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2058320 00:24:38.051 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:24:38.051 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.051 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2058320 00:24:38.051 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:38.051 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:38.051 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2058320' 00:24:38.051 killing process with pid 2058320 00:24:38.051 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2058320 00:24:38.051 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2058320 00:24:38.314 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:38.314 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:38.314 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:38.314 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:38.314 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:38.314 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:38.314 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:38.314 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:38.314 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:38.314 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.314 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.314 17:07:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:40.861 00:24:40.861 real 0m11.735s 00:24:40.861 user 0m8.926s 00:24:40.861 sys 0m6.155s 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:40.861 ************************************ 00:24:40.861 END TEST nvmf_identify 00:24:40.861 ************************************ 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.861 ************************************ 00:24:40.861 START TEST nvmf_perf 00:24:40.861 ************************************ 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:40.861 * Looking for test storage... 
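The controller and namespace report printed above is produced by SPDK's identify example binary, which host/identify.sh runs against the TCP listener under test. A minimal standalone reproduction would look roughly like the sketch below; the binary path assumes the default in-tree build layout, and the address, port, and NQN are the ones used in this run.

  # Query the NVMe-oF target over TCP and dump controller + namespace
  # details, yielding the same style of report as above.
  ./build/examples/identify \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'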
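The teardown traced here combines SPDK's JSON-RPC interface with ordinary process and kernel-module cleanup. Done by hand against a running target, the equivalent steps would be roughly as follows; this is a sketch assuming the stock scripts/rpc.py helper on the default RPC socket, and note the suite tracks the target PID itself (2058320 in this run) rather than looking it up.

  # Remove the test subsystem from the target, stop the target process,
  # then unload the kernel initiator modules, mirroring nvmftestfini.
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 2058320                      # the suite's killprocess() also waits for exit
  modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring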
00:24:40.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:40.861 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:40.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.862 --rc genhtml_branch_coverage=1 00:24:40.862 --rc genhtml_function_coverage=1 00:24:40.862 --rc genhtml_legend=1 00:24:40.862 --rc geninfo_all_blocks=1 00:24:40.862 --rc geninfo_unexecuted_blocks=1 00:24:40.862 00:24:40.862 ' 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:40.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.862 --rc genhtml_branch_coverage=1 00:24:40.862 --rc genhtml_function_coverage=1 00:24:40.862 --rc genhtml_legend=1 00:24:40.862 --rc geninfo_all_blocks=1 00:24:40.862 --rc geninfo_unexecuted_blocks=1 00:24:40.862 00:24:40.862 ' 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:40.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.862 --rc genhtml_branch_coverage=1 00:24:40.862 --rc genhtml_function_coverage=1 00:24:40.862 --rc genhtml_legend=1 00:24:40.862 --rc geninfo_all_blocks=1 00:24:40.862 --rc geninfo_unexecuted_blocks=1 00:24:40.862 00:24:40.862 ' 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:40.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.862 --rc genhtml_branch_coverage=1 00:24:40.862 --rc genhtml_function_coverage=1 00:24:40.862 --rc genhtml_legend=1 00:24:40.862 --rc geninfo_all_blocks=1 00:24:40.862 --rc geninfo_unexecuted_blocks=1 00:24:40.862 00:24:40.862 ' 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.862 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:40.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:40.863 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:40.863 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:40.863 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:40.863 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:40.863 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:40.863 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:40.863 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:40.863 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:40.863 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.863 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:40.863 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:40.863 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:40.863 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.863 17:07:32 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.863 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.863 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:40.863 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:40.863 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:40.863 17:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:49.011 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:49.011 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:49.011 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:49.011 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:49.011 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:49.011 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:49.011 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:49.011 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:49.011 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:49.011 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:49.011 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:49.011 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:49.011 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:49.011 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:49.011 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:49.011 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:49.011 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:49.011 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:49.012 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:49.012 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:49.012 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:49.012 17:07:39 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:49.012 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:49.012 17:07:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.012 17:07:40 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:49.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:49.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:24:49.012 00:24:49.012 --- 10.0.0.2 ping statistics --- 00:24:49.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.012 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:49.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:49.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:24:49.012 00:24:49.012 --- 10.0.0.1 ping statistics --- 00:24:49.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.012 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2063010 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2063010 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:49.012 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2063010 ']' 00:24:49.013 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.013 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:49.013 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:49.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.013 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:49.013 17:07:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:49.013 [2024-11-20 17:07:40.394268] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:24:49.013 [2024-11-20 17:07:40.394334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.013 [2024-11-20 17:07:40.493152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:49.013 [2024-11-20 17:07:40.546413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.013 [2024-11-20 17:07:40.546467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.013 [2024-11-20 17:07:40.546475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:49.013 [2024-11-20 17:07:40.546483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:49.013 [2024-11-20 17:07:40.546489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:49.013 [2024-11-20 17:07:40.548562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.013 [2024-11-20 17:07:40.548723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:49.013 [2024-11-20 17:07:40.548890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.013 [2024-11-20 17:07:40.548890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:49.275 17:07:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:49.275 17:07:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:24:49.275 17:07:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:49.275 17:07:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:49.275 17:07:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:49.275 17:07:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:49.275 17:07:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:49.275 17:07:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:49.846 17:07:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:49.846 17:07:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:49.846 17:07:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:49.846 17:07:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:50.107 17:07:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
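
With Malloc0 created and the local NVMe controller at 0000:65:00.0 claimed, perf.sh wires the target together over the RPC socket. Condensed into one place, the sequence traced below is roughly the following (rpc.py path shortened from the full workspace path above; NQN, serial number, address and port are the values from this run):

    rpc=./spdk/scripts/rpc.py    # shortened; the trace uses the full jenkins workspace path
    $rpc nvmf_create_transport -t tcp -o                               # "*** TCP Transport Init ***"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # becomes NSID 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1      # becomes NSID 2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
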
00:24:50.107 17:07:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:50.107 17:07:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:50.107 17:07:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:50.107 17:07:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:50.369 [2024-11-20 17:07:42.388900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.369 17:07:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:50.629 17:07:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:50.629 17:07:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:50.890 17:07:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:50.890 17:07:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:50.890 17:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.151 [2024-11-20 17:07:43.176190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.151 17:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:51.411 17:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:51.411 17:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:51.411 17:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:51.411 17:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:52.797 Initializing NVMe Controllers 00:24:52.797 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:52.797 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:52.797 Initialization complete. Launching workers. 
00:24:52.797 ======================================================== 00:24:52.797 Latency(us) 00:24:52.797 Device Information : IOPS MiB/s Average min max 00:24:52.797 PCIE (0000:65:00.0) NSID 1 from core 0: 78587.20 306.98 406.34 13.15 4939.79 00:24:52.797 ======================================================== 00:24:52.797 Total : 78587.20 306.98 406.34 13.15 4939.79 00:24:52.797 00:24:52.797 17:07:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:54.181 Initializing NVMe Controllers 00:24:54.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:54.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:54.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:54.181 Initialization complete. Launching workers. 00:24:54.181 ======================================================== 00:24:54.181 Latency(us) 00:24:54.181 Device Information : IOPS MiB/s Average min max 00:24:54.181 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 114.00 0.45 8969.71 235.49 45653.04 00:24:54.181 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.00 0.21 18301.68 7964.20 50851.20 00:24:54.181 ======================================================== 00:24:54.181 Total : 169.00 0.66 12006.74 235.49 50851.20 00:24:54.181 00:24:54.181 17:07:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:55.566 Initializing NVMe Controllers 00:24:55.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:55.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:55.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:55.566 Initialization complete. Launching workers. 00:24:55.566 ======================================================== 00:24:55.566 Latency(us) 00:24:55.566 Device Information : IOPS MiB/s Average min max 00:24:55.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12050.16 47.07 2686.02 432.43 44659.82 00:24:55.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3656.74 14.28 8809.54 7267.83 22996.47 00:24:55.566 ======================================================== 00:24:55.566 Total : 15706.90 61.36 4111.64 432.43 44659.82 00:24:55.566 00:24:55.566 17:07:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:55.566 17:07:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:55.566 17:07:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:58.110 Initializing NVMe Controllers 00:24:58.110 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:58.110 Controller IO queue size 128, less than required. 00:24:58.110 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:58.110 Controller IO queue size 128, less than required. 00:24:58.110 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:58.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:58.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:58.110 Initialization complete. Launching workers. 00:24:58.110 ======================================================== 00:24:58.110 Latency(us) 00:24:58.110 Device Information : IOPS MiB/s Average min max 00:24:58.110 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1790.42 447.60 72610.64 39385.83 128976.49 00:24:58.110 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 626.77 156.69 216337.19 64015.84 316220.69 00:24:58.111 ======================================================== 00:24:58.111 Total : 2417.19 604.30 109878.58 39385.83 316220.69 00:24:58.111 00:24:58.111 17:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:58.111 No valid NVMe controllers or AIO or URING devices found 00:24:58.111 Initializing NVMe Controllers 00:24:58.111 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:58.111 Controller IO queue size 128, less than required. 00:24:58.111 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:58.111 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:58.111 Controller IO queue size 128, less than required. 00:24:58.111 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:58.111 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:58.111 WARNING: Some requested NVMe devices were skipped 00:24:58.111 17:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:00.656 Initializing NVMe Controllers 00:25:00.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:00.656 Controller IO queue size 128, less than required. 00:25:00.656 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:00.656 Controller IO queue size 128, less than required. 00:25:00.656 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:00.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:00.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:00.656 Initialization complete. Launching workers. 
00:25:00.656 00:25:00.656 ==================== 00:25:00.656 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:00.656 TCP transport: 00:25:00.656 polls: 29910 00:25:00.656 idle_polls: 12918 00:25:00.656 sock_completions: 16992 00:25:00.656 nvme_completions: 8223 00:25:00.656 submitted_requests: 12302 00:25:00.656 queued_requests: 1 00:25:00.656 00:25:00.656 ==================== 00:25:00.656 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:00.656 TCP transport: 00:25:00.656 polls: 30123 00:25:00.656 idle_polls: 17153 00:25:00.656 sock_completions: 12970 00:25:00.656 nvme_completions: 7481 00:25:00.656 submitted_requests: 11242 00:25:00.656 queued_requests: 1 00:25:00.656 ======================================================== 00:25:00.656 Latency(us) 00:25:00.656 Device Information : IOPS MiB/s Average min max 00:25:00.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2055.20 513.80 62807.40 33415.64 96774.03 00:25:00.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1869.72 467.43 69062.31 31420.67 132317.62 00:25:00.656 ======================================================== 00:25:00.656 Total : 3924.92 981.23 65787.06 31420.67 132317.62 00:25:00.656 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:00.656 rmmod nvme_tcp 00:25:00.656 rmmod nvme_fabrics 00:25:00.656 rmmod nvme_keyring 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2063010 ']' 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2063010 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2063010 ']' 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2063010 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2063010 00:25:00.656 17:07:52 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2063010' 00:25:00.656 killing process with pid 2063010 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2063010 00:25:00.656 17:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2063010 00:25:02.569 17:07:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:02.569 17:07:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:02.569 17:07:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:02.569 17:07:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:02.569 17:07:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:25:02.569 17:07:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:02.569 17:07:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:25:02.569 17:07:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:02.569 17:07:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:02.569 17:07:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.569 17:07:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.569 17:07:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.116 17:07:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:05.116 00:25:05.116 real 0m24.303s 00:25:05.116 user 0m58.222s 00:25:05.116 sys 0m8.721s 00:25:05.116 17:07:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:05.116 17:07:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:05.116 ************************************ 00:25:05.116 END TEST nvmf_perf 00:25:05.116 ************************************ 00:25:05.116 17:07:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:05.116 17:07:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:05.116 17:07:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:05.116 17:07:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.116 ************************************ 00:25:05.116 START TEST nvmf_fio_host 00:25:05.116 ************************************ 00:25:05.116 17:07:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:05.116 * Looking for test storage... 
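
Before the fio host test begins, the perf matrix exercised above is worth summarizing in one place. A condensed sketch of the spdk_nvme_perf invocations from this run, with paths shortened; the comments on the digest (-H/-I) and IO-unit (-O) flags are my reading of the perf tool's usage, not something the log itself restates:

    perf=./spdk/build/bin/spdk_nvme_perf    # shortened workspace path
    tgt='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    $perf -q 1   -o 4096   -w randrw -M 50 -t 1 -r "$tgt"           # QD1: baseline latency
    $perf -q 32  -o 4096   -w randrw -M 50 -t 1 -HI -r "$tgt"       # with TCP header/data digests enabled
    $perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r "$tgt"  # 256 KiB IOs split into 16 KiB units
    $perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r "$tgt" --transport-stat  # dump poll/completion stats
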
00:25:05.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:05.116 17:07:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:05.116 17:07:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:05.116 17:07:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:05.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.116 --rc genhtml_branch_coverage=1 00:25:05.116 --rc genhtml_function_coverage=1 00:25:05.116 --rc genhtml_legend=1 00:25:05.116 --rc geninfo_all_blocks=1 00:25:05.116 --rc geninfo_unexecuted_blocks=1 00:25:05.116 00:25:05.116 ' 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:05.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.116 --rc genhtml_branch_coverage=1 00:25:05.116 --rc genhtml_function_coverage=1 00:25:05.116 --rc genhtml_legend=1 00:25:05.116 --rc geninfo_all_blocks=1 00:25:05.116 --rc geninfo_unexecuted_blocks=1 00:25:05.116 00:25:05.116 ' 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:05.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.116 --rc genhtml_branch_coverage=1 00:25:05.116 --rc genhtml_function_coverage=1 00:25:05.116 --rc genhtml_legend=1 00:25:05.116 --rc geninfo_all_blocks=1 00:25:05.116 --rc geninfo_unexecuted_blocks=1 00:25:05.116 00:25:05.116 ' 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:05.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.116 --rc genhtml_branch_coverage=1 00:25:05.116 --rc genhtml_function_coverage=1 00:25:05.116 --rc genhtml_legend=1 00:25:05.116 --rc geninfo_all_blocks=1 00:25:05.116 --rc geninfo_unexecuted_blocks=1 00:25:05.116 00:25:05.116 ' 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:05.116 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.117 17:07:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:05.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:05.117 
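The "[: : integer expression expected" complaint above is nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': the test builtin's -eq needs integers on both sides, and the variable under test expands to an empty string. A minimal sketch of the failing pattern and one defensive rewrite; the variable name here is illustrative, not the one the script actually uses:

    flag=""                          # an unset/empty SPDK_TEST_* style variable
    [ "$flag" -eq 1 ]                # bash: [: : integer expression expected
    if [ "${flag:-0}" -eq 1 ]; then  # defaulting empty to 0 keeps -eq well-formed
        echo "feature enabled"
    fi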
17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:05.117 17:07:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.271 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:13.272 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:13.272 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:13.272 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:13.272 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:13.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:13.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms
00:25:13.272
00:25:13.272 --- 10.0.0.2 ping statistics ---
00:25:13.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:13.272 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:13.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:13.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms
00:25:13.272
00:25:13.272 --- 10.0.0.1 ping statistics ---
00:25:13.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:13.272 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2069838
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2069838
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2069838 ']'
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:13.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:13.272 17:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:25:13.272 [2024-11-20 17:08:04.728781] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization...
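Stripped of the xtrace prefixes, the launch sequence above reduces to starting nvmf_tgt inside the new namespace and waiting for its RPC socket before configuring anything. A sketch with repository paths shortened; the polling loop only approximates what waitforlisten does (the real helper retries up to max_retries=100):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the target answers
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done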
00:25:13.272 [2024-11-20 17:08:04.728852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:13.272 [2024-11-20 17:08:04.829600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:13.272 [2024-11-20 17:08:04.883587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:13.272 [2024-11-20 17:08:04.883641] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:13.272 [2024-11-20 17:08:04.883651] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:13.272 [2024-11-20 17:08:04.883658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:13.272 [2024-11-20 17:08:04.883665] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:13.272 [2024-11-20 17:08:04.885798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:13.272 [2024-11-20 17:08:04.885966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:25:13.272 [2024-11-20 17:08:04.886124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:13.272 [2024-11-20 17:08:04.886125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:25:13.534 17:08:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:13.534 17:08:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0
00:25:13.534 17:08:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:25:13.794 [2024-11-20 17:08:05.716932] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:13.794 17:08:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt
00:25:13.794 17:08:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:13.794 17:08:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:25:13.794 17:08:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:25:14.056 Malloc1
00:25:14.056 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:14.056 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:25:14.318 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:14.579 [2024-11-20 17:08:06.590847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:14.579 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:25:14.842 17:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:25:15.103 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:25:15.103 fio-3.35
00:25:15.103 Starting 1 thread
00:25:17.671
00:25:17.671 test: (groupid=0, jobs=1): err= 0: pid=2070607: Wed Nov 20 17:08:09 2024
00:25:17.671 read: IOPS=12.1k, BW=47.1MiB/s (49.4MB/s)(94.5MiB/2005msec)
00:25:17.671 slat (usec): min=2, max=230, avg= 2.14, stdev= 2.08
00:25:17.671 clat (usec): min=3079, max=9491, avg=5843.38, stdev=1236.36
00:25:17.671 lat (usec): min=3113, max=9493, avg=5845.52, stdev=1236.38
00:25:17.671 clat percentiles (usec):
00:25:17.671 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4948],
00:25:17.671 | 30.00th=[ 5080], 40.00th=[ 5145], 50.00th=[ 5276], 60.00th=[ 5407],
00:25:17.671 | 70.00th=[ 5735], 80.00th=[ 7439], 90.00th=[ 7898], 95.00th=[ 8225],
00:25:17.671 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[ 9241], 99.95th=[ 9241],
00:25:17.671 | 99.99th=[ 9241]
00:25:17.671 bw ( KiB/s): min=34896, max=55928, per=100.00%, avg=48256.00, stdev=9770.08, samples=4
00:25:17.671 iops : min= 8724, max=13982, avg=12064.00, stdev=2442.52, samples=4
00:25:17.671 write: IOPS=12.0k, BW=46.9MiB/s (49.2MB/s)(94.1MiB/2005msec); 0 zone resets
00:25:17.671 slat (usec): min=2, max=220, avg= 2.22, stdev= 1.57
00:25:17.671 clat (usec): min=2362, max=8245, avg=4714.25, stdev=995.18
00:25:17.671 lat (usec): min=2387, max=8248, avg=4716.47, stdev=995.23
00:25:17.671 clat percentiles (usec):
00:25:17.671 | 1.00th=[ 3490], 5.00th=[ 3720], 10.00th=[ 3851], 20.00th=[ 3982],
00:25:17.671 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4359],
00:25:17.671 | 70.00th=[ 4621], 80.00th=[ 5997], 90.00th=[ 6390], 95.00th=[ 6652],
00:25:17.671 | 99.00th=[ 6980], 99.50th=[ 7111], 99.90th=[ 7439], 99.95th=[ 7570],
00:25:17.671 | 99.99th=[ 8160]
00:25:17.671 bw ( KiB/s): min=35792, max=55552, per=99.98%, avg=48058.00, stdev=9280.03, samples=4
00:25:17.671 iops : min= 8948, max=13888, avg=12014.50, stdev=2320.01, samples=4
00:25:17.671 lat (msec) : 4=11.19%, 10=88.81%
00:25:17.671 cpu : usr=72.36%, sys=26.25%, ctx=40, majf=0, minf=17
00:25:17.671 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:25:17.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:17.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:17.671 issued rwts: total=24189,24095,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:17.671 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:17.671
00:25:17.671 Run status group 0 (all jobs):
00:25:17.671 READ: bw=47.1MiB/s (49.4MB/s), 47.1MiB/s-47.1MiB/s (49.4MB/s-49.4MB/s), io=94.5MiB (99.1MB), run=2005-2005msec
00:25:17.671 WRITE: bw=46.9MiB/s (49.2MB/s), 46.9MiB/s-46.9MiB/s (49.2MB/s-49.2MB/s), io=94.1MiB (98.7MB), run=2005-2005msec
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
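Both fio jobs in this test are launched the way the trace above shows: the ldd/grep checks first confirm no sanitizer runtime needs preloading, then the SPDK NVMe external ioengine is injected via LD_PRELOAD and the target is addressed through fio's filename syntax rather than a kernel block device. With repository paths shortened, the first invocation boils down to:

    LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio \
        ./app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096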
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:25:17.671 17:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:25:17.938 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:25:17.938 fio-3.35
00:25:17.938 Starting 1 thread
00:25:20.484
00:25:20.484 test: (groupid=0, jobs=1): err= 0: pid=2071222: Wed Nov 20 17:08:12 2024
00:25:20.484 read: IOPS=9691, BW=151MiB/s (159MB/s)(304MiB/2006msec)
00:25:20.484 slat (usec): min=3, max=110, avg= 3.59, stdev= 1.57
00:25:20.484 clat (usec): min=1338, max=14326, avg=8026.89, stdev=1978.32
00:25:20.484 lat (usec): min=1342, max=14330, avg=8030.48, stdev=1978.46
00:25:20.484 clat percentiles (usec):
00:25:20.484 | 1.00th=[ 3916], 5.00th=[ 5080], 10.00th=[ 5604], 20.00th=[ 6259],
00:25:20.484 | 30.00th=[ 6783], 40.00th=[ 7308], 50.00th=[ 7898], 60.00th=[ 8455],
00:25:20.484 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[11338],
00:25:20.484 | 99.00th=[12780], 99.50th=[13042], 99.90th=[13698], 99.95th=[13960],
00:25:20.484 | 99.99th=[14091]
00:25:20.484 bw ( KiB/s): min=72608, max=79552, per=49.81%, avg=77248.00, stdev=3156.29, samples=4
00:25:20.484 iops : min= 4538, max= 4972, avg=4828.00, stdev=197.27, samples=4
00:25:20.484 write: IOPS=5577, BW=87.1MiB/s (91.4MB/s)(157MiB/1799msec); 0 zone resets
00:25:20.484 slat (usec): min=39, max=447, avg= 40.89, stdev= 8.05
00:25:20.484 clat (usec): min=1458, max=16083, avg=9042.62, stdev=1407.73
00:25:20.484 lat (usec): min=1497, max=16220, avg=9083.51, stdev=1409.78
00:25:20.484 clat percentiles (usec):
00:25:20.484 | 1.00th=[ 6128], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 7898],
00:25:20.484 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241],
00:25:20.484 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[10814], 95.00th=[11469],
00:25:20.484 | 99.00th=[12649], 99.50th=[13698], 99.90th=[15533], 99.95th=[15664],
00:25:20.484 | 99.99th=[15926]
00:25:20.484 bw ( KiB/s): min=74624, max=82400, per=89.74%, avg=80088.00, stdev=3677.39, samples=4
00:25:20.484 iops : min= 4664, max= 5150, avg=5005.50, stdev=229.84, samples=4
00:25:20.484 lat (msec) : 2=0.05%, 4=0.76%, 10=79.44%, 20=19.75%
00:25:20.484 cpu : usr=85.19%, sys=13.72%, ctx=13, majf=0, minf=31
00:25:20.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5%
00:25:20.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:20.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:20.484 issued rwts: total=19442,10034,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:20.484 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:20.484
00:25:20.484 Run status group 0 (all jobs):
00:25:20.484 READ: bw=151MiB/s (159MB/s), 151MiB/s-151MiB/s (159MB/s-159MB/s), io=304MiB (319MB), run=2006-2006msec
00:25:20.484 WRITE: bw=87.1MiB/s (91.4MB/s), 87.1MiB/s-87.1MiB/s (91.4MB/s-91.4MB/s), io=157MiB (164MB), run=1799-1799msec
00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync
00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e
00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:20.484 rmmod nvme_tcp
00:25:20.484 rmmod nvme_fabrics
00:25:20.484 rmmod nvme_keyring
00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e
00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0
00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2069838 ']'
00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2069838
00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2069838 ']'
00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- #
kill -0 2069838 00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2069838 00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2069838' 00:25:20.484 killing process with pid 2069838 00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2069838 00:25:20.484 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2069838 00:25:20.745 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:20.745 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:20.745 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:20.745 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:20.745 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:20.745 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:20.745 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:20.745 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:20.745 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:20.745 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.745 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.745 17:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.660 17:08:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:22.660 00:25:22.660 real 0m17.936s 00:25:22.660 user 1m1.643s 00:25:22.660 sys 0m7.816s 00:25:22.660 17:08:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:22.660 17:08:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.660 ************************************ 00:25:22.660 END TEST nvmf_fio_host 00:25:22.660 ************************************ 00:25:22.922 17:08:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:22.922 17:08:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:22.922 17:08:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:22.922 17:08:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.922 ************************************ 00:25:22.922 START TEST nvmf_failover 00:25:22.922 ************************************ 00:25:22.922 17:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:22.922 * Looking for test storage... 00:25:22.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:22.922 17:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:22.922 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:23.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.184 --rc genhtml_branch_coverage=1 00:25:23.184 --rc genhtml_function_coverage=1 00:25:23.184 --rc genhtml_legend=1 00:25:23.184 --rc geninfo_all_blocks=1 00:25:23.184 --rc geninfo_unexecuted_blocks=1 00:25:23.184 00:25:23.184 ' 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:23.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.184 --rc genhtml_branch_coverage=1 00:25:23.184 --rc genhtml_function_coverage=1 00:25:23.184 --rc genhtml_legend=1 00:25:23.184 --rc geninfo_all_blocks=1 00:25:23.184 --rc geninfo_unexecuted_blocks=1 00:25:23.184 00:25:23.184 ' 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:23.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.184 --rc genhtml_branch_coverage=1 00:25:23.184 --rc genhtml_function_coverage=1 00:25:23.184 --rc genhtml_legend=1 00:25:23.184 --rc geninfo_all_blocks=1 00:25:23.184 --rc geninfo_unexecuted_blocks=1 00:25:23.184 00:25:23.184 ' 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:23.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.184 --rc genhtml_branch_coverage=1 00:25:23.184 --rc genhtml_function_coverage=1 00:25:23.184 --rc genhtml_legend=1 00:25:23.184 --rc geninfo_all_blocks=1 00:25:23.184 --rc geninfo_unexecuted_blocks=1 00:25:23.184 00:25:23.184 ' 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.184 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:23.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
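The lt/cmp_versions trace a few entries back is the harness deciding whether the installed lcov predates version 2 before enabling coverage options. An approximate re-creation of that comparison, reduced to the '<' case only (the real scripts/common.sh cmp_versions handles the other operators as well):

    lt() {
        local IFS=.-:                  # split fields on '.', '-' and ':'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                       # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x"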
00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:23.185 17:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:31.323 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:31.323 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:31.323 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:31.323 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.323 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:31.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:31.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:25:31.324 00:25:31.324 --- 10.0.0.2 ping statistics --- 00:25:31.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.324 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:31.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:31.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:25:31.324 00:25:31.324 --- 10.0.0.1 ping statistics --- 00:25:31.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.324 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2075900 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2075900 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2075900 ']' 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:31.324 17:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:31.324 [2024-11-20 17:08:22.786096] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:25:31.324 [2024-11-20 17:08:22.786172] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.324 [2024-11-20 17:08:22.885636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:31.324 [2024-11-20 17:08:22.937313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
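(Note: the namespace plumbing behind the two pings above, restated as plain ip/iptables commands with the interface names and addresses from this run; nvmf/common.sh wraps these in helpers, and every nvmf_tgt invocation below runs inside the namespace via NVMF_TARGET_NS_CMD.)

  ip netns add cvl_0_0_ns_spdk                         # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                   # root ns -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back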
00:25:31.324 [2024-11-20 17:08:22.937364] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.324 [2024-11-20 17:08:22.937373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.324 [2024-11-20 17:08:22.937380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.324 [2024-11-20 17:08:22.937386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:31.324 [2024-11-20 17:08:22.939460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.324 [2024-11-20 17:08:22.939685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:31.324 [2024-11-20 17:08:22.939686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.585 17:08:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.585 17:08:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:31.585 17:08:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:31.585 17:08:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:31.585 17:08:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:31.585 17:08:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.585 17:08:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:31.846 [2024-11-20 17:08:23.825391] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.846 17:08:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:32.107 Malloc0 00:25:32.107 17:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:32.107 17:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:32.368 17:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:32.631 [2024-11-20 17:08:24.631998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.631 17:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:32.892 [2024-11-20 17:08:24.828622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:32.892 17:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:32.892 [2024-11-20 17:08:25.025326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:25:32.892 17:08:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2076459 00:25:32.892 17:08:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:32.892 17:08:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:32.892 17:08:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2076459 /var/tmp/bdevperf.sock 00:25:32.892 17:08:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2076459 ']' 00:25:32.892 17:08:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:32.892 17:08:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:32.892 17:08:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:32.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:32.892 17:08:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:32.892 17:08:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:33.837 17:08:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.837 17:08:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:33.837 17:08:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:34.408 NVMe0n1 00:25:34.409 17:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:34.409 00:25:34.409 17:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2076722 00:25:34.409 17:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:34.409 17:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:35.794 17:08:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:35.794 17:08:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:39.100 17:08:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:39.100 00:25:39.100 17:08:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:39.100 [2024-11-20 17:08:31.225644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcfcf0 is same with the state(6) to be set
[13 further identical recv-state messages for tqpair=0x1dcfcf0 omitted]
00:25:39.100 17:08:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:42.528 17:08:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:42.528 [2024-11-20 17:08:34.418567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:42.528 17:08:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:43.484 17:08:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:43.484 [2024-11-20 17:08:35.606082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd0bf0 is same with the state(6) to be set
[7 further identical recv-state messages for tqpair=0x1dd0bf0 omitted]
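(Note: taken together, the rpc.py calls logged above reduce to the short sequence below — a condensed paraphrase, with the rpc.py path shortened and the three add_listener calls folded into a loop; the script itself issues them one by one.)

  # target-side setup (host/failover.sh@22-@28)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # host side: bdevperf attaches the subsystem with failover enabled
  # (@35, @36, @47 above -- once per listener port)
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

While the verify workload runs, the script removes and restores listeners (4420 removed at @43, 4421 at @48, 4420 restored at @53, 4422 removed at @57); each removal forces a path failover, and the bursts of tcp.c:1773 recv-state errors above accompany the target dropping the qpairs on the listener that just disappeared.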
00:25:43.484 17:08:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2076722
00:25:50.081 {
00:25:50.081 "results": [
00:25:50.081 {
00:25:50.081 "job": "NVMe0n1",
00:25:50.081 "core_mask": "0x1",
00:25:50.081 "workload": "verify",
00:25:50.081 "status": "finished",
00:25:50.081 "verify_range": {
00:25:50.081 "start": 0,
00:25:50.081 "length": 16384
00:25:50.081 },
00:25:50.081 "queue_depth": 128,
00:25:50.081 "io_size": 4096,
00:25:50.081 "runtime": 15.009749,
00:25:50.081 "iops": 12396.809566902151,
00:25:50.081 "mibps": 48.42503737071153,
00:25:50.081 "io_failed": 9653,
00:25:50.081 "io_timeout": 0,
00:25:50.081 "avg_latency_us": 9795.159102554933,
00:25:50.081 "min_latency_us": 542.72,
00:25:50.081 "max_latency_us": 19879.253333333334
00:25:50.081 }
00:25:50.081 ],
00:25:50.081 "core_count": 1
00:25:50.081 }
00:25:50.081 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2076459
00:25:50.081 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2076459 ']'
00:25:50.081 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2076459
00:25:50.081 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:25:50.081 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:50.081 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2076459
00:25:50.081 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:50.081 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:50.081 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2076459'
00:25:50.081 killing process with pid 2076459
00:25:50.082 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2076459
00:25:50.082 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2076459
00:25:50.082 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:50.082 [2024-11-20 17:08:25.114861] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization...
00:25:50.082 [2024-11-20 17:08:25.114939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2076459 ]
00:25:50.082 [2024-11-20 17:08:25.207429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:50.082 [2024-11-20 17:08:25.259082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:50.082 Running I/O for 15 seconds...
00:25:50.082 11211.00 IOPS, 43.79 MiB/s [2024-11-20T16:08:42.258Z]
[2024-11-20 17:08:27.731456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:50.082 [2024-11-20 17:08:27.731500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.082 [2024-11-20 17:08:27.731518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:50.082 [2024-11-20 17:08:27.731527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.082 [2024-11-20 17:08:27.731538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.082 [2024-11-20 17:08:27.731545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[further print_command/print_completion pairs omitted — READ and WRITE commands covering lba 96120-97136, every one completed ABORTED - SQ DELETION (00/08) exactly as above; the captured dump breaks off mid-record at timestamp 17:08:27.733632]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.085 [2024-11-20 17:08:27.733641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.085 [2024-11-20 17:08:27.733649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.085 [2024-11-20 17:08:27.733659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.085 [2024-11-20 17:08:27.733667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.085 [2024-11-20 17:08:27.733676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.085 [2024-11-20 17:08:27.733684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.085 [2024-11-20 17:08:27.733694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.085 [2024-11-20 17:08:27.733701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.085 [2024-11-20 17:08:27.733711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.085 [2024-11-20 17:08:27.733721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.086 [2024-11-20 17:08:27.733731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.086 [2024-11-20 17:08:27.733738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.086 [2024-11-20 17:08:27.733748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.086 [2024-11-20 17:08:27.733756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.086 [2024-11-20 17:08:27.733765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1395ab0 is same with the state(6) to be set 00:25:50.086 [2024-11-20 17:08:27.733776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.086 [2024-11-20 17:08:27.733782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.086 [2024-11-20 17:08:27.733788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96936 len:8 PRP1 0x0 PRP2 0x0 00:25:50.086 [2024-11-20 17:08:27.733796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.086 [2024-11-20 17:08:27.733836] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:50.086 [2024-11-20 
17:08:27.733860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.086 [2024-11-20 17:08:27.733869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.086 [2024-11-20 17:08:27.733879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.086 [2024-11-20 17:08:27.733886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.086 [2024-11-20 17:08:27.733895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.086 [2024-11-20 17:08:27.733904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.086 [2024-11-20 17:08:27.733912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.086 [2024-11-20 17:08:27.733919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.086 [2024-11-20 17:08:27.733928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:50.086 [2024-11-20 17:08:27.737548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:50.086 [2024-11-20 17:08:27.737574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1374d70 (9): Bad file descriptor 00:25:50.086 [2024-11-20 17:08:27.891463] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
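The round above is the shape of every path failure in this test: the initiator loses the 4420 qpair, completes all queued I/O as ABORTED - SQ DELETION, fails the trid over to 4421, and the controller reset succeeds. As a minimal sketch of how a multipath setup like this is typically registered through SPDK's rpc.py (the bdev name Nvme0 and the -x failover policy flag are illustrative assumptions, not read out of this log):

  # target side: expose one subsystem on each of the ports the log fails across
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 -f ipv4
  # initiator side: attach every path under the same controller name so the
  # extra trids are kept as failover targets (-b name and -x policy assumed)
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

With paths registered that way, removing the active listener (nvmf_subsystem_remove_listener on the current port) is one way to provoke exactly the abort/failover/reset sequence printed above.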
00:25:50.086 10376.50 IOPS, 40.53 MiB/s [2024-11-20T16:08:42.262Z] 10691.00 IOPS, 41.76 MiB/s [2024-11-20T16:08:42.262Z] 11124.25 IOPS, 43.45 MiB/s [2024-11-20T16:08:42.262Z]
[2024-11-20 17:08:31.226201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-20 17:08:31.226235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for the rest of the queued I/O: WRITE lba:68360 through lba:68400, READ lba:67504 through lba:68344, and WRITE lba:68408 through lba:68512, every one completed ABORTED - SQ DELETION (00/08) qid:1 ...]
[2024-11-20 17:08:31.227847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-11-20 17:08:31.227854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-11-20 17:08:31.227859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68520 len:8 PRP1 0x0 PRP2 0x0
[2024-11-20 17:08:31.227864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 17:08:31.227897] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[... the four outstanding ASYNC EVENT REQUEST admin commands (qid:0, cid:0-3) are again completed ABORTED - SQ DELETION (00/08) ...]
[2024-11-20 17:08:31.227960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
[2024-11-20 17:08:31.230441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
[2024-11-20 17:08:31.230462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1374d70 (9): Bad file descriptor
[2024-11-20 17:08:31.267731] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:25:50.090 11362.20 IOPS, 44.38 MiB/s [2024-11-20T16:08:42.262Z] 11638.83 IOPS, 45.46 MiB/s [2024-11-20T16:08:42.262Z] 11839.00 IOPS, 46.25 MiB/s [2024-11-20T16:08:42.262Z] 11999.75 IOPS, 46.87 MiB/s [2024-11-20T16:08:42.262Z]
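Between rounds the bdevperf counters recover (10376 -> 11999 IOPS so far), which is the signal that each failover actually completed rather than just aborting I/O. When triaging a log like this, the abort storms are noise and the round boundaries are what matter; a sketch of pulling them out with standard grep (the console.log file name is illustrative):

  # one line per failover attempt and per completed controller reset
  grep -E 'bdev_nvme_failover_trid|Resetting controller successful' console.log
  # total queued commands completed as ABORTED - SQ DELETION across all rounds
  grep -c 'ABORTED - SQ DELETION' console.log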
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.090 [2024-11-20 17:08:31.227938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.090 [2024-11-20 17:08:31.227943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.090 [2024-11-20 17:08:31.227949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.090 [2024-11-20 17:08:31.227955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.090 [2024-11-20 17:08:31.227960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:50.090 [2024-11-20 17:08:31.230441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:50.090 [2024-11-20 17:08:31.230462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1374d70 (9): Bad file descriptor 00:25:50.090 [2024-11-20 17:08:31.267731] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:25:50.090 11362.20 IOPS, 44.38 MiB/s [2024-11-20T16:08:42.266Z] 11638.83 IOPS, 45.46 MiB/s [2024-11-20T16:08:42.266Z] 11839.00 IOPS, 46.25 MiB/s [2024-11-20T16:08:42.266Z] 11999.75 IOPS, 46.87 MiB/s [2024-11-20T16:08:42.266Z] [2024-11-20 17:08:35.608262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.090 [2024-11-20 17:08:35.608296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.090 [2024-11-20 17:08:35.608308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.090 [2024-11-20 17:08:35.608314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.090 [2024-11-20 17:08:35.608321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.090 [2024-11-20 17:08:35.608326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.090 [2024-11-20 17:08:35.608333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.090 [2024-11-20 17:08:35.608338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.090 [2024-11-20 17:08:35.608345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.090 [2024-11-20 17:08:35.608355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.090 [2024-11-20 17:08:35.608361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.090 [2024-11-20 17:08:35.608366] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.090 [2024-11-20 17:08:35.608373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.090 [2024-11-20 17:08:35.608378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.090 [2024-11-20 17:08:35.608385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.090 [2024-11-20 17:08:35.608390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.090 [2024-11-20 17:08:35.608397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.090 [2024-11-20 17:08:35.608402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.090 [2024-11-20 17:08:35.608409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.090 [2024-11-20 17:08:35.608414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.090 [2024-11-20 17:08:35.608420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.090 [2024-11-20 17:08:35.608425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608486] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 
17:08:35.608752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.091 [2024-11-20 17:08:35.608844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.091 [2024-11-20 17:08:35.608849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.608856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.608861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.608868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.608873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.608879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.608885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.608891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.608896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.608902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.608907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.608914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.608918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.608924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.608929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.608936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.608941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.608948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.608953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.608959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.608965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.608971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.608977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.608984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:72 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.608989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.608995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.609000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.609008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.609013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.609019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.609024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.609030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.609035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.609041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.609046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.609052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.609058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.609064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.609069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.609076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.609080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.609088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.609093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.609099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13872 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.609104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.609111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.609117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.609124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.609130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.609136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.092 [2024-11-20 17:08:35.609141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.092 [2024-11-20 17:08:35.609147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 
17:08:35.609225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.093 [2024-11-20 17:08:35.609513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.093 [2024-11-20 17:08:35.609518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.094 [2024-11-20 17:08:35.609530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.094 [2024-11-20 17:08:35.609541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.094 [2024-11-20 17:08:35.609563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:8 PRP1 0x0 PRP2 0x0 00:25:50.094 [2024-11-20 17:08:35.609569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.094 [2024-11-20 17:08:35.609583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.094 [2024-11-20 17:08:35.609588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14184 len:8 PRP1 0x0 PRP2 0x0 00:25:50.094 [2024-11-20 17:08:35.609593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:50.094 [2024-11-20 17:08:35.609599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.094 [2024-11-20 17:08:35.609602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.094 [2024-11-20 17:08:35.609607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14192 len:8 PRP1 0x0 PRP2 0x0 00:25:50.094 [2024-11-20 17:08:35.609612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.094 [2024-11-20 17:08:35.609621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.094 [2024-11-20 17:08:35.609625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14200 len:8 PRP1 0x0 PRP2 0x0 00:25:50.094 [2024-11-20 17:08:35.609631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.094 [2024-11-20 17:08:35.609641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.094 [2024-11-20 17:08:35.609645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:8 PRP1 0x0 PRP2 0x0 00:25:50.094 [2024-11-20 17:08:35.609650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.094 [2024-11-20 17:08:35.609660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.094 [2024-11-20 17:08:35.609664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14216 len:8 PRP1 0x0 PRP2 0x0 00:25:50.094 [2024-11-20 17:08:35.609669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.094 [2024-11-20 17:08:35.609678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.094 [2024-11-20 17:08:35.609682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14224 len:8 PRP1 0x0 PRP2 0x0 00:25:50.094 [2024-11-20 17:08:35.609688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.094 [2024-11-20 17:08:35.609698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.094 [2024-11-20 17:08:35.609702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14232 len:8 PRP1 0x0 PRP2 0x0 00:25:50.094 [2024-11-20 17:08:35.609706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609714] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.094 [2024-11-20 17:08:35.609718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.094 [2024-11-20 17:08:35.609722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13296 len:8 PRP1 0x0 PRP2 0x0 00:25:50.094 [2024-11-20 17:08:35.609727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.094 [2024-11-20 17:08:35.609737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.094 [2024-11-20 17:08:35.609741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13304 len:8 PRP1 0x0 PRP2 0x0 00:25:50.094 [2024-11-20 17:08:35.609747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.094 [2024-11-20 17:08:35.609756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.094 [2024-11-20 17:08:35.609760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:8 PRP1 0x0 PRP2 0x0 00:25:50.094 [2024-11-20 17:08:35.609765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.094 [2024-11-20 17:08:35.609774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.094 [2024-11-20 17:08:35.609778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13320 len:8 PRP1 0x0 PRP2 0x0 00:25:50.094 [2024-11-20 17:08:35.609783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.094 [2024-11-20 17:08:35.609792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.094 [2024-11-20 17:08:35.609797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13328 len:8 PRP1 0x0 PRP2 0x0 00:25:50.094 [2024-11-20 17:08:35.609803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.094 [2024-11-20 17:08:35.609812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.094 [2024-11-20 17:08:35.609816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13336 len:8 PRP1 0x0 PRP2 0x0 00:25:50.094 [2024-11-20 17:08:35.609821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:25:50.094 [2024-11-20 17:08:35.609830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.094 [2024-11-20 17:08:35.609834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13344 len:8 PRP1 0x0 PRP2 0x0 00:25:50.094 [2024-11-20 17:08:35.609839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.094 [2024-11-20 17:08:35.609848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.094 [2024-11-20 17:08:35.609853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:8 PRP1 0x0 PRP2 0x0 00:25:50.094 [2024-11-20 17:08:35.609857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.094 [2024-11-20 17:08:35.609867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.094 [2024-11-20 17:08:35.609871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14248 len:8 PRP1 0x0 PRP2 0x0 00:25:50.094 [2024-11-20 17:08:35.609876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.094 [2024-11-20 17:08:35.609885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.094 [2024-11-20 17:08:35.609889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14256 len:8 PRP1 0x0 PRP2 0x0 00:25:50.094 [2024-11-20 17:08:35.609894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.094 [2024-11-20 17:08:35.609899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.094 [2024-11-20 17:08:35.609903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.094 [2024-11-20 17:08:35.609908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14264 len:8 PRP1 0x0 PRP2 0x0 00:25:50.095 [2024-11-20 17:08:35.609913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.095 [2024-11-20 17:08:35.609918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.095 [2024-11-20 17:08:35.609922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.095 [2024-11-20 17:08:35.609926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:8 PRP1 0x0 PRP2 0x0 00:25:50.095 [2024-11-20 17:08:35.609931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.095 [2024-11-20 17:08:35.609936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.095 [2024-11-20 
17:08:35.609940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.095 [2024-11-20 17:08:35.609944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14280 len:8 PRP1 0x0 PRP2 0x0 00:25:50.095 [2024-11-20 17:08:35.609948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.095 [2024-11-20 17:08:35.609953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.095 [2024-11-20 17:08:35.609958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.095 [2024-11-20 17:08:35.609962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14288 len:8 PRP1 0x0 PRP2 0x0 00:25:50.095 [2024-11-20 17:08:35.609967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.095 [2024-11-20 17:08:35.609973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.095 [2024-11-20 17:08:35.609976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.095 [2024-11-20 17:08:35.609980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14296 len:8 PRP1 0x0 PRP2 0x0 00:25:50.095 [2024-11-20 17:08:35.609985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.095 [2024-11-20 17:08:35.609990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.095 [2024-11-20 17:08:35.624066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.095 [2024-11-20 17:08:35.624092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:8 PRP1 0x0 PRP2 0x0 00:25:50.095 [2024-11-20 17:08:35.624101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.095 [2024-11-20 17:08:35.624148] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:50.095 [2024-11-20 17:08:35.624187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.095 [2024-11-20 17:08:35.624199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.095 [2024-11-20 17:08:35.624206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.095 [2024-11-20 17:08:35.624213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.095 [2024-11-20 17:08:35.624219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.095 [2024-11-20 17:08:35.624224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.095 [2024-11-20 17:08:35.624230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
00:25:50.095 [2024-11-20 17:08:35.624235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.095 [2024-11-20 17:08:35.624240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:25:50.095 [2024-11-20 17:08:35.624275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1374d70 (9): Bad file descriptor
00:25:50.095 [2024-11-20 17:08:35.627142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:25:50.095 [2024-11-20 17:08:35.651563] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:25:50.095 12049.67 IOPS, 47.07 MiB/s [2024-11-20T16:08:42.271Z]
00:25:50.095 12151.70 IOPS, 47.47 MiB/s [2024-11-20T16:08:42.271Z]
00:25:50.095 12223.82 IOPS, 47.75 MiB/s [2024-11-20T16:08:42.271Z]
00:25:50.095 12277.25 IOPS, 47.96 MiB/s [2024-11-20T16:08:42.271Z]
00:25:50.095 12332.08 IOPS, 48.17 MiB/s [2024-11-20T16:08:42.271Z]
00:25:50.095 12360.36 IOPS, 48.28 MiB/s [2024-11-20T16:08:42.271Z]
00:25:50.095 12396.40 IOPS, 48.42 MiB/s
00:25:50.095 Latency(us)
00:25:50.095 [2024-11-20T16:08:42.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:50.095 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:50.095 Verification LBA range: start 0x0 length 0x4000
00:25:50.095 NVMe0n1 : 15.01 12396.81 48.43 643.12 0.00 9795.16 542.72 19879.25
00:25:50.095 [2024-11-20T16:08:42.271Z] ===================================================================================================================
00:25:50.095 [2024-11-20T16:08:42.271Z] Total : 12396.81 48.43 643.12 0.00 9795.16 542.72 19879.25
00:25:50.095 Received shutdown signal, test time was about 15.000000 seconds
00:25:50.095
00:25:50.095 Latency(us)
00:25:50.095 [2024-11-20T16:08:42.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:50.095 [2024-11-20T16:08:42.271Z] ===================================================================================================================
00:25:50.095 [2024-11-20T16:08:42.271Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:50.095 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:50.095 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:50.095 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:50.095 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2079602
00:25:50.095 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2079602 /var/tmp/bdevperf.sock
00:25:50.095 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:50.095 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2079602 ']'
00:25:50.095 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:50.095 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:50.095 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:50.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:50.095 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:50.095 17:08:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:50.667 17:08:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:50.667 17:08:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:25:50.667 17:08:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:50.927 [2024-11-20 17:08:42.909091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:50.927 17:08:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:50.927 [2024-11-20 17:08:43.093533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:51.188 17:08:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:51.449 NVMe0n1
00:25:51.449 17:08:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:52.021
00:25:52.021 17:08:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:52.282
00:25:52.282 17:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:52.282 17:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:25:52.542 17:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:52.542 17:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:25:55.842 17:08:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:55.842 17:08:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:25:55.842 17:08:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2080847
00:25:55.842 17:08:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:55.842 17:08:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2080847
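
The interleaved xtrace above corresponds to the multipath setup phase of host/failover.sh. The following is a minimal bash sketch of that sequence, reconstructed only from the commands visible in this trace; it is not the verbatim script. The waitforlisten helper is approximated with a socket poll, and a running nvmf target that already serves nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 is assumed.

    #!/usr/bin/env bash
    # Sketch reconstructed from the trace above -- not the verbatim host/failover.sh.
    set -e

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc=$rootdir/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # Start bdevperf in RPC-server mode (-z): it idles until perform_tests arrives.
    "$rootdir/build/examples/bdevperf" -z -r "$sock" -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # Approximation of waitforlisten: poll until the RPC socket exists.
    while [ ! -S "$sock" ]; do sleep 0.1; done

    # Give the target two more listeners so the host has three portals (4420-4422).
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422

    # Attach all three paths under one controller name; -x failover keeps the
    # extra paths as standby paths rather than active-active multipath.
    for port in 4420 4421 4422; do
        "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
            -s "$port" -f ipv4 -n "$nqn" -x failover
    done
    "$rpc" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0

    # Drop the active path; queued I/O is aborted (SQ DELETION) and the bdev
    # layer fails over to the next portal, as the notices earlier in this log show.
    "$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n "$nqn"
    sleep 3

    # Run the queued verify job and wait for its result.
    "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests &
    wait $!

The JSON object printed next in the log is the per-job result that perform_tests returns over the same RPC socket.
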
00:25:56.784 {
00:25:56.784 "results": [
00:25:56.784 {
00:25:56.784 "job": "NVMe0n1",
00:25:56.784 "core_mask": "0x1",
00:25:56.784 "workload": "verify",
00:25:56.784 "status": "finished",
00:25:56.784 "verify_range": {
00:25:56.784 "start": 0,
00:25:56.784 "length": 16384
00:25:56.784 },
00:25:56.784 "queue_depth": 128,
00:25:56.784 "io_size": 4096,
00:25:56.784 "runtime": 1.003422,
00:25:56.784 "iops": 12937.727097871086,
00:25:56.784 "mibps": 50.53799647605893,
00:25:56.784 "io_failed": 0,
00:25:56.784 "io_timeout": 0,
00:25:56.784 "avg_latency_us": 9857.319110563345,
00:25:56.784 "min_latency_us": 2048.0,
00:25:56.784 "max_latency_us": 9721.173333333334
00:25:56.784 }
00:25:56.784 ],
00:25:56.784 "core_count": 1
00:25:56.784 }
00:25:56.784 17:08:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-11-20 17:08:41.963734] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization...
[2024-11-20 17:08:41.963795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2079602 ]
[2024-11-20 17:08:42.046912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 17:08:42.076443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-20 17:08:44.617820] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[2024-11-20 17:08:44.617858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 17:08:44.617868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 17:08:44.617875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 17:08:44.617881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 17:08:44.617887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 17:08:44.617893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 17:08:44.617898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 17:08:44.617904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 17:08:44.617909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:25:56.784 [2024-11-20 17:08:44.617930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:56.784 [2024-11-20 17:08:44.617943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148fd70 (9): Bad file descriptor 00:25:56.784 [2024-11-20 17:08:44.709359] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:56.784 Running I/O for 1 seconds... 00:25:56.784 12854.00 IOPS, 50.21 MiB/s 00:25:56.784 Latency(us) 00:25:56.784 [2024-11-20T16:08:48.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.784 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:56.784 Verification LBA range: start 0x0 length 0x4000 00:25:56.784 NVMe0n1 : 1.00 12937.73 50.54 0.00 0.00 9857.32 2048.00 9721.17 00:25:56.784 [2024-11-20T16:08:48.960Z] =================================================================================================================== 00:25:56.784 [2024-11-20T16:08:48.960Z] Total : 12937.73 50.54 0.00 0.00 9857.32 2048.00 9721.17 00:25:56.784 17:08:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:56.784 17:08:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:57.045 17:08:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:57.306 17:08:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:57.306 17:08:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:57.566 17:08:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:57.566 17:08:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:00.868 17:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:00.868 17:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:00.868 17:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2079602 00:26:00.868 17:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2079602 ']' 00:26:00.868 17:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2079602 00:26:00.868 17:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:00.868 17:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:00.868 17:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2079602 00:26:00.868 17:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:00.868 17:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:00.868 17:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2079602' 00:26:00.868 killing process with pid 2079602 00:26:00.868 17:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2079602 00:26:00.868 17:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2079602 00:26:01.128 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:01.128 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:01.128 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:01.128 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:01.128 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:01.128 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:01.128 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:26:01.128 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:01.128 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:26:01.128 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:01.128 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:01.128 rmmod nvme_tcp 00:26:01.128 rmmod nvme_fabrics 00:26:01.390 rmmod nvme_keyring 00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2075900 ']' 00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2075900 00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2075900 ']' 00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2075900 00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2075900 00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2075900' 00:26:01.390 killing process with pid 2075900 00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2075900 00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2075900 00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']'
00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:01.390 17:08:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:03.934 17:08:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:03.934
00:26:03.934 real 0m40.702s
00:26:03.934 user 2m5.087s
00:26:03.934 sys 0m8.850s
00:26:03.934 17:08:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:03.934 17:08:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:03.934 ************************************
00:26:03.934 END TEST nvmf_failover
00:26:03.934 ************************************
00:26:03.934 17:08:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:26:03.934 17:08:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:03.934 17:08:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:03.934 17:08:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:03.934 ************************************
00:26:03.934 START TEST nvmf_host_discovery
00:26:03.934 ************************************
00:26:03.934 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:26:03.934 * Looking for test storage...
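One step back before the discovery test spins up: the teardown that just completed is the stock nvmftestfini path. Condensed (a sketch, assuming _remove_spdk_ns amounts to deleting the test namespace; pid 2075900 was the target in this run):

# remove the subsystem, unload kernel initiator modules, stop the target
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_tcp/nvme_fabrics/nvme_keyring going away
modprobe -v -r nvme-fabrics
kill "$nvmfpid"

# drop the SPDK_NVMF iptables rules and undo the namespace plumbing
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1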
00:26:03.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:03.934 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:03.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.935 --rc genhtml_branch_coverage=1 00:26:03.935 --rc genhtml_function_coverage=1 00:26:03.935 --rc genhtml_legend=1 00:26:03.935 --rc geninfo_all_blocks=1 00:26:03.935 --rc geninfo_unexecuted_blocks=1 00:26:03.935 00:26:03.935 ' 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:03.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.935 --rc genhtml_branch_coverage=1 00:26:03.935 --rc genhtml_function_coverage=1 00:26:03.935 --rc genhtml_legend=1 00:26:03.935 --rc geninfo_all_blocks=1 00:26:03.935 --rc geninfo_unexecuted_blocks=1 00:26:03.935 00:26:03.935 ' 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:03.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.935 --rc genhtml_branch_coverage=1 00:26:03.935 --rc genhtml_function_coverage=1 00:26:03.935 --rc genhtml_legend=1 00:26:03.935 --rc geninfo_all_blocks=1 00:26:03.935 --rc geninfo_unexecuted_blocks=1 00:26:03.935 00:26:03.935 ' 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:03.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.935 --rc genhtml_branch_coverage=1 00:26:03.935 --rc genhtml_function_coverage=1 00:26:03.935 --rc genhtml_legend=1 00:26:03.935 --rc geninfo_all_blocks=1 00:26:03.935 --rc geninfo_unexecuted_blocks=1 00:26:03.935 00:26:03.935 ' 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:03.935 17:08:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:03.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:03.935 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:03.936 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:03.936 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:03.936 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:03.936 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:03.936 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:03.936 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:03.936 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:03.936 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.936 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:03.936 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.936 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:03.936 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:03.936 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:26:03.936 17:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:12.084 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:12.084 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:12.084 17:09:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:12.084 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:12.084 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:12.085 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:12.085 
17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:12.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:12.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:26:12.085 00:26:12.085 --- 10.0.0.2 ping statistics --- 00:26:12.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.085 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:12.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:12.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms
00:26:12.085
00:26:12.085 --- 10.0.0.1 ping statistics ---
00:26:12.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:12.085 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2086176
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2086176
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2086176 ']'
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:12.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:12.085 17:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:12.085 [2024-11-20 17:09:03.445043] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization...
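A note on the topology nvmftestinit assembled above: with NET_TYPE=phy, the first E810 port (cvl_0_0) is moved into a network namespace to act as the target at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Stripped of the trace markers, the setup is:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port on the initiator interface, then verify both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is also why the nvmf_tgt is launched under ip netns exec cvl_0_0_ns_spdk: its listeners have to bind 10.0.0.2 inside the namespace.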
00:26:12.085 [2024-11-20 17:09:03.445110] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.085 [2024-11-20 17:09:03.547833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.085 [2024-11-20 17:09:03.599490] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.085 [2024-11-20 17:09:03.599541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.085 [2024-11-20 17:09:03.599551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.085 [2024-11-20 17:09:03.599558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.085 [2024-11-20 17:09:03.599565] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:12.085 [2024-11-20 17:09:03.600326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.346 [2024-11-20 17:09:04.324443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.346 [2024-11-20 17:09:04.336764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.346 null0 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.346 null1 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2086319 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2086319 /tmp/host.sock 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2086319 ']' 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:12.346 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:12.346 17:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.346 [2024-11-20 17:09:04.435212] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:26:12.346 [2024-11-20 17:09:04.435276] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086319 ] 00:26:12.607 [2024-11-20 17:09:04.525332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.607 [2024-11-20 17:09:04.578846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:13.178 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.439 [2024-11-20 17:09:05.599993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:13.439 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:13.701 17:09:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:26:13.701 17:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:14.272 [2024-11-20 17:09:06.313167] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:14.272 [2024-11-20 17:09:06.313193] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:14.272 [2024-11-20 17:09:06.313207] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:14.272 
[2024-11-20 17:09:06.440609] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:14.533 [2024-11-20 17:09:06.541495] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:14.533 [2024-11-20 17:09:06.542565] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1ff47a0:1 started. 00:26:14.533 [2024-11-20 17:09:06.544197] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:14.533 [2024-11-20 17:09:06.544216] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:14.533 [2024-11-20 17:09:06.552391] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1ff47a0 was disconnected and freed. delete nvme_qpair. 00:26:14.794 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:14.794 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:14.794 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:14.794 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:14.794 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:14.794 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.794 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:14.794 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.794 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:14.795 17:09:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:14.795 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.056 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:26:15.056 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:15.056 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:15.056 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:15.056 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:15.056 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:15.056 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:15.056 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:15.056 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:15.056 17:09:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:15.056 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:15.056 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:15.056 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.056 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.056 17:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.056 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:15.056 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:15.056 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:15.056 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:15.056 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:15.057 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.057 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.057 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.057 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:15.057 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:15.057 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:15.057 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:15.057 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:15.057 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:15.057 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.057 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:15.057 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.057 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:15.057 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.057 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:15.318 [2024-11-20 17:09:07.248261] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1fc3120:1 started. 
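Every assertion traced above reduces to a handful of shell helpers that poll the host application's RPC socket (/tmp/host.sock) and flatten the JSON with jq, sort and xargs. A minimal re-sketch, reconstructed from the xtrace lines (function names and @line tags follow the trace; the upstream sources may differ in detail):

    # rpc_cmd wraps scripts/rpc.py in the SPDK autotest environment; the -s
    # flag selects the host app's RPC socket instead of the target's.

    get_subsystem_names() {    # host/discovery.sh@59
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers |
            jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {          # host/discovery.sh@55
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs |
            jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {    # host/discovery.sh@63: port of every path to ctrlr $1
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    get_notification_count() { # host/discovery.sh@74-75: events since last checkpoint
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    waitforcondition() {       # autotest_common.sh@918-924: poll for up to ~10 s
        local cond=$1 max=10
        while ((max--)); do
            if eval "$cond"; then return 0; fi
            sleep 1
        done
        return 1
    }

The sort | xargs pair is what turns the JSON name arrays into the single-line strings compared above: '' while nothing is attached, then "nvme0n1 nvme0n2" once both namespaces arrive.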
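On the target side, this stretch of the trace is plain provisioning over rpc.py: one subsystem, a null bdev per namespace, a listener per port, and an allowed host NQN (the second namespace lands right here; the 4421 listener follows a few steps below). Condensed sketch; the null bdevs are created earlier in the run, so the bdev_null_create sizes shown are illustrative assumptions, not values taken from this trace:

    # Target-side sequence behind host/discovery.sh@86-@118 (rpc_cmd without
    # -s talks to the target app). Sizes for bdev_null_create are assumptions.
    rpc_cmd bdev_null_create null0 100 512
    rpc_cmd bdev_null_create null1 100 512
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0   # -> nvme0n1 on the host
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1   # -> nvme0n2
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421

Each add_ns raises an event on the host side, which is why the is_notification_count_eq checks interleave with these calls and why notify_id advances as the trace proceeds.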
00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.318 [2024-11-20 17:09:07.294858] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1fc3120 was disconnected and freed. delete nvme_qpair. 00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:15.318 17:09:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.259 [2024-11-20 17:09:08.387451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:16.259 [2024-11-20 17:09:08.387653] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:16.259 [2024-11-20 17:09:08.387677] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:16.259 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:16.521 17:09:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:16.521 [2024-11-20 17:09:08.516071] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:16.521 17:09:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:16.783 [2024-11-20 17:09:08.787510] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:26:16.783 [2024-11-20 17:09:08.787544] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:16.783 [2024-11-20 17:09:08.787552] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:16.783 [2024-11-20 17:09:08.787556] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:17.725 17:09:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.725 [2024-11-20 17:09:09.627430] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:17.725 [2024-11-20 17:09:09.627448] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:17.725 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:17.725 [2024-11-20 17:09:09.635304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.725 [2024-11-20 17:09:09.635320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.725 [2024-11-20 17:09:09.635326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.725 [2024-11-20 17:09:09.635332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.725 [2024-11-20 17:09:09.635338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.725 [2024-11-20 17:09:09.635343] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.725 [2024-11-20 17:09:09.635350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.726 [2024-11-20 17:09:09.635355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.726 [2024-11-20 17:09:09.635361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc4e10 is same with the state(6) to be set 00:26:17.726 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:17.726 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:17.726 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:17.726 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.726 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:17.726 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.726 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:17.726 [2024-11-20 17:09:09.645320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc4e10 (9): Bad file descriptor 00:26:17.726 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.726 [2024-11-20 17:09:09.655354] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:17.726 [2024-11-20 17:09:09.655363] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:17.726 [2024-11-20 17:09:09.655367] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:17.726 [2024-11-20 17:09:09.655372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:17.726 [2024-11-20 17:09:09.655386] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:17.726 [2024-11-20 17:09:09.655734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.726 [2024-11-20 17:09:09.655746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fc4e10 with addr=10.0.0.2, port=4420 00:26:17.726 [2024-11-20 17:09:09.655752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc4e10 is same with the state(6) to be set 00:26:17.726 [2024-11-20 17:09:09.655764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc4e10 (9): Bad file descriptor 00:26:17.726 [2024-11-20 17:09:09.655778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:17.726 [2024-11-20 17:09:09.655784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:17.726 [2024-11-20 17:09:09.655790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
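Decoding the failure group above and the retries that follow: pulling the 4420 listener (discovery.sh@127) tears down the admin queue, so the host's outstanding ASYNC EVENT REQUESTs complete as ABORTED - SQ DELETION (00/08), and every reconnect attempt then dies inside connect() with errno = 111, which on Linux is ECONNREFUSED, since nothing listens on that port any more:

    # Illustrative check of the errno value, not part of the test:
    python3 -c 'import errno; print(errno.errorcode[111])'   # -> ECONNREFUSED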
00:26:17.726 [2024-11-20 17:09:09.655795] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:17.726 [2024-11-20 17:09:09.655799] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:17.726 [2024-11-20 17:09:09.655802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:17.726 [2024-11-20 17:09:09.665415] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:17.726 [2024-11-20 17:09:09.665424] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:17.726 [2024-11-20 17:09:09.665428] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:17.726 [2024-11-20 17:09:09.665431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:17.726 [2024-11-20 17:09:09.665442] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:17.726 [2024-11-20 17:09:09.665618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.726 [2024-11-20 17:09:09.665628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fc4e10 with addr=10.0.0.2, port=4420 00:26:17.726 [2024-11-20 17:09:09.665633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc4e10 is same with the state(6) to be set 00:26:17.726 [2024-11-20 17:09:09.665641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc4e10 (9): Bad file descriptor 00:26:17.726 [2024-11-20 17:09:09.665649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:17.726 [2024-11-20 17:09:09.665653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:17.726 [2024-11-20 17:09:09.665658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:17.726 [2024-11-20 17:09:09.665663] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:17.726 [2024-11-20 17:09:09.665666] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:17.726 [2024-11-20 17:09:09.665669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:17.726 [2024-11-20 17:09:09.675471] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:17.726 [2024-11-20 17:09:09.675481] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:17.726 [2024-11-20 17:09:09.675484] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:17.726 [2024-11-20 17:09:09.675487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:17.726 [2024-11-20 17:09:09.675498] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:17.726 [2024-11-20 17:09:09.675831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.726 [2024-11-20 17:09:09.675841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fc4e10 with addr=10.0.0.2, port=4420 00:26:17.726 [2024-11-20 17:09:09.675849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc4e10 is same with the state(6) to be set 00:26:17.726 [2024-11-20 17:09:09.675857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc4e10 (9): Bad file descriptor 00:26:17.726 [2024-11-20 17:09:09.675870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:17.726 [2024-11-20 17:09:09.675874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:17.726 [2024-11-20 17:09:09.675880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:17.726 [2024-11-20 17:09:09.675884] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:17.726 [2024-11-20 17:09:09.675887] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:17.726 [2024-11-20 17:09:09.675890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:17.726 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.726 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:17.726 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:17.726 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:17.726 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:17.726 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:17.726 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:17.726 [2024-11-20 17:09:09.685527] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:17.726 [2024-11-20 17:09:09.685536] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:17.726 [2024-11-20 17:09:09.685539] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:17.726 [2024-11-20 17:09:09.685543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:17.726 [2024-11-20 17:09:09.685553] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
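The cadence of these retry groups (delete qpairs, disconnect, reconnect, ECONNREFUSED, repeat) is reconnect policy rather than accident. Recent SPDK exposes the relevant knobs through bdev_nvme_set_options; a hedged sketch, with option names as listed by current scripts/rpc.py (verify against the tree in use, and set them before any controller is attached):

    rpc_cmd -s /tmp/host.sock bdev_nvme_set_options \
        --reconnect-delay-sec 1 \
        --ctrlr-loss-timeout-sec 5 \
        --fast-io-fail-timeout-sec 2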
00:26:17.726 [2024-11-20 17:09:09.685884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.726 [2024-11-20 17:09:09.685893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fc4e10 with addr=10.0.0.2, port=4420 00:26:17.726 [2024-11-20 17:09:09.685898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc4e10 is same with the state(6) to be set 00:26:17.726 [2024-11-20 17:09:09.685906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc4e10 (9): Bad file descriptor 00:26:17.726 [2024-11-20 17:09:09.685917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:17.726 [2024-11-20 17:09:09.685922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:17.727 [2024-11-20 17:09:09.685927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:17.727 [2024-11-20 17:09:09.685932] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:17.727 [2024-11-20 17:09:09.685935] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:17.727 [2024-11-20 17:09:09.685938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:17.727 [2024-11-20 17:09:09.695582] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:17.727 [2024-11-20 17:09:09.695592] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:17.727 [2024-11-20 17:09:09.695595] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:17.727 [2024-11-20 17:09:09.695598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:17.727 [2024-11-20 17:09:09.695609] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:17.727 [2024-11-20 17:09:09.695951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.727 [2024-11-20 17:09:09.695961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fc4e10 with addr=10.0.0.2, port=4420 00:26:17.727 [2024-11-20 17:09:09.695966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc4e10 is same with the state(6) to be set 00:26:17.727 [2024-11-20 17:09:09.695974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc4e10 (9): Bad file descriptor 00:26:17.727 [2024-11-20 17:09:09.695987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:17.727 [2024-11-20 17:09:09.695992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:17.727 [2024-11-20 17:09:09.695997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:17.727 [2024-11-20 17:09:09.696002] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:17.727 [2024-11-20 17:09:09.696005] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:17.727 [2024-11-20 17:09:09.696008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:17.727 [2024-11-20 17:09:09.705638] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:17.727 [2024-11-20 17:09:09.705646] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:17.727 [2024-11-20 17:09:09.705649] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:17.727 [2024-11-20 17:09:09.705653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:17.727 [2024-11-20 17:09:09.705663] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:17.727 [2024-11-20 17:09:09.705947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.727 [2024-11-20 17:09:09.705956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fc4e10 with addr=10.0.0.2, port=4420 00:26:17.727 [2024-11-20 17:09:09.705961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc4e10 is same with the state(6) to be set 00:26:17.727 [2024-11-20 17:09:09.705969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc4e10 (9): Bad file descriptor 00:26:17.727 [2024-11-20 17:09:09.705983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:17.727 [2024-11-20 17:09:09.705987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:17.727 [2024-11-20 17:09:09.705992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:17.727 [2024-11-20 17:09:09.705997] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:17.727 [2024-11-20 17:09:09.706000] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:17.727 [2024-11-20 17:09:09.706003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:17.727 [2024-11-20 17:09:09.715692] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:17.727 [2024-11-20 17:09:09.715700] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:17.727 [2024-11-20 17:09:09.715703] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:17.727 [2024-11-20 17:09:09.715707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:17.727 [2024-11-20 17:09:09.715717] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:17.727 [2024-11-20 17:09:09.715935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.727 [2024-11-20 17:09:09.715943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fc4e10 with addr=10.0.0.2, port=4420 00:26:17.727 [2024-11-20 17:09:09.715949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc4e10 is same with the state(6) to be set 00:26:17.727 [2024-11-20 17:09:09.715957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc4e10 (9): Bad file descriptor 00:26:17.727 [2024-11-20 17:09:09.715965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:17.727 [2024-11-20 17:09:09.715969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:17.727 [2024-11-20 17:09:09.715975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:17.727 [2024-11-20 17:09:09.715979] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:17.727 [2024-11-20 17:09:09.715982] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:17.727 [2024-11-20 17:09:09.715985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
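The resolution arrives in the records just below: a fresh discovery log page no longer lists the 4420 path, the host drops it, and only 4421 survives. From the test's point of view the whole episode is two lines (helpers as sketched earlier; the trace compares against $NVMF_SECOND_PORT, i.e. 4421):

    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420   # discovery.sh@127
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'                 # discovery.sh@131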
00:26:17.727 [2024-11-20 17:09:09.716054] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:17.727 [2024-11-20 17:09:09.716066] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:17.727 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.728 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.988 17:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:18.927 [2024-11-20 17:09:11.005464] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:18.927 [2024-11-20 17:09:11.005482] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:18.927 [2024-11-20 17:09:11.005492] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:18.927 [2024-11-20 17:09:11.093737] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:19.193 [2024-11-20 17:09:11.157470] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:19.194 [2024-11-20 17:09:11.158186] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1fc4120:1 started. 00:26:19.194 [2024-11-20 17:09:11.159560] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:19.194 [2024-11-20 17:09:11.159584] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:19.194 [2024-11-20 17:09:11.163473] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1fc4120 was disconnected and freed. delete nvme_qpair. 
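The exchange above and the JSON that follows exercise a duplicate-start check: with the first discovery service still attached, a second bdev_nvme_start_discovery under the same name base must be rejected. A minimal sketch of that sequence, not the harness's literal code, using only the flags the trace itself logs (-w maps to wait_for_attach; /tmp/host.sock is this run's host RPC socket):

    sock=/tmp/host.sock
    # First start succeeds; -w blocks until the discovered ctrlr is attached.
    scripts/rpc.py -s "$sock" bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # An identical second start is expected to fail with -17 "File exists",
    # which the harness asserts via its NOT wrapper, as the response below shows.
    if scripts/rpc.py -s "$sock" bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
        echo "duplicate discovery start unexpectedly succeeded" >&2
        exit 1
    fi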
00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.194 request: 00:26:19.194 { 00:26:19.194 "name": "nvme", 00:26:19.194 "trtype": "tcp", 00:26:19.194 "traddr": "10.0.0.2", 00:26:19.194 "adrfam": "ipv4", 00:26:19.194 "trsvcid": "8009", 00:26:19.194 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:19.194 "wait_for_attach": true, 00:26:19.194 "method": "bdev_nvme_start_discovery", 00:26:19.194 "req_id": 1 00:26:19.194 } 00:26:19.194 Got JSON-RPC error response 00:26:19.194 response: 00:26:19.194 { 00:26:19.194 "code": -17, 00:26:19.194 "message": "File exists" 00:26:19.194 } 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.194 request: 00:26:19.194 { 00:26:19.194 "name": "nvme_second", 00:26:19.194 "trtype": "tcp", 00:26:19.194 "traddr": "10.0.0.2", 00:26:19.194 "adrfam": "ipv4", 00:26:19.194 "trsvcid": "8009", 00:26:19.194 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:19.194 "wait_for_attach": true, 00:26:19.194 "method": "bdev_nvme_start_discovery", 00:26:19.194 "req_id": 1 00:26:19.194 } 00:26:19.194 Got JSON-RPC error response 00:26:19.194 response: 00:26:19.194 { 00:26:19.194 "code": -17, 00:26:19.194 "message": "File exists" 00:26:19.194 } 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.194 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:19.543 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.543 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:19.543 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:19.543 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:19.543 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:19.543 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:19.543 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:19.543 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:19.543 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:19.543 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:19.543 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.543 17:09:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.487 [2024-11-20 17:09:12.408422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.487 [2024-11-20 17:09:12.408458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fdecc0 with addr=10.0.0.2, port=8010 00:26:20.487 [2024-11-20 17:09:12.408471] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:20.487 [2024-11-20 
17:09:12.408482] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:20.487 [2024-11-20 17:09:12.408488] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:21.428 [2024-11-20 17:09:13.410616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.428 [2024-11-20 17:09:13.410637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fdecc0 with addr=10.0.0.2, port=8010 00:26:21.428 [2024-11-20 17:09:13.410646] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:21.428 [2024-11-20 17:09:13.410651] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:21.428 [2024-11-20 17:09:13.410656] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:22.370 [2024-11-20 17:09:14.412644] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:22.370 request: 00:26:22.370 { 00:26:22.370 "name": "nvme_second", 00:26:22.370 "trtype": "tcp", 00:26:22.370 "traddr": "10.0.0.2", 00:26:22.370 "adrfam": "ipv4", 00:26:22.370 "trsvcid": "8010", 00:26:22.370 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:22.370 "wait_for_attach": false, 00:26:22.370 "attach_timeout_ms": 3000, 00:26:22.370 "method": "bdev_nvme_start_discovery", 00:26:22.370 "req_id": 1 00:26:22.370 } 00:26:22.370 Got JSON-RPC error response 00:26:22.370 response: 00:26:22.370 { 00:26:22.370 "code": -110, 00:26:22.370 "message": "Connection timed out" 00:26:22.370 } 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2086319 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:22.370 17:09:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:22.370 rmmod nvme_tcp 00:26:22.370 rmmod nvme_fabrics 00:26:22.370 rmmod nvme_keyring 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2086176 ']' 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2086176 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2086176 ']' 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2086176 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:22.370 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:22.632 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2086176 00:26:22.632 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:22.632 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:22.632 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2086176' 00:26:22.632 killing process with pid 2086176 00:26:22.632 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2086176 00:26:22.632 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2086176 00:26:22.632 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:22.632 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:22.632 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:22.632 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:22.632 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:22.632 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:22.632 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:22.632 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:22.632 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:22.632 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
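The nvmftestfini teardown traced through here condenses to roughly the following sketch; _remove_spdk_ns is the harness's own helper, and the pid is this run's target process:

    # Unload the kernel initiator modules pulled in for the test.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    modprobe -v -r nvme-keyring
    # Stop the nvmf_tgt app (2086176 in this run), then drop only the
    # SPDK_NVMF-tagged iptables rules, preserving the rest of the ruleset.
    kill 2086176
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Flush the initiator-side address and remove the target namespace.
    ip -4 addr flush cvl_0_1
    _remove_spdk_ns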
00:26:22.632 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:22.632 17:09:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.177 17:09:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:25.177 00:26:25.177 real 0m21.113s 00:26:25.177 user 0m25.238s 00:26:25.177 sys 0m7.169s 00:26:25.177 17:09:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:25.177 17:09:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.177 ************************************ 00:26:25.177 END TEST nvmf_host_discovery 00:26:25.177 ************************************ 00:26:25.177 17:09:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:25.177 17:09:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:25.177 17:09:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:25.177 17:09:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.177 ************************************ 00:26:25.177 START TEST nvmf_host_multipath_status 00:26:25.177 ************************************ 00:26:25.177 17:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:25.177 * Looking for test storage... 00:26:25.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:25.177 17:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:25.177 17:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:26:25.177 17:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:25.178 17:09:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:25.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.178 --rc genhtml_branch_coverage=1 00:26:25.178 --rc genhtml_function_coverage=1 00:26:25.178 --rc genhtml_legend=1 00:26:25.178 --rc geninfo_all_blocks=1 00:26:25.178 --rc geninfo_unexecuted_blocks=1 00:26:25.178 00:26:25.178 ' 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:25.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.178 --rc genhtml_branch_coverage=1 00:26:25.178 --rc genhtml_function_coverage=1 00:26:25.178 --rc genhtml_legend=1 00:26:25.178 --rc geninfo_all_blocks=1 00:26:25.178 --rc geninfo_unexecuted_blocks=1 00:26:25.178 00:26:25.178 ' 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:25.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.178 --rc genhtml_branch_coverage=1 00:26:25.178 --rc genhtml_function_coverage=1 00:26:25.178 --rc genhtml_legend=1 00:26:25.178 --rc geninfo_all_blocks=1 00:26:25.178 --rc geninfo_unexecuted_blocks=1 00:26:25.178 00:26:25.178 ' 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:25.178 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:26:25.178 --rc genhtml_branch_coverage=1 00:26:25.178 --rc genhtml_function_coverage=1 00:26:25.178 --rc genhtml_legend=1 00:26:25.178 --rc geninfo_all_blocks=1 00:26:25.178 --rc geninfo_unexecuted_blocks=1 00:26:25.178 00:26:25.178 ' 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:26:25.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:25.178 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:25.179 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:25.179 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:25.179 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:25.179 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:25.179 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:25.179 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:25.179 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.179 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:25.179 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:25.179 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:25.179 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.179 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.179 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.179 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:25.179 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:25.179 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:25.179 17:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:33.320 17:09:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:33.320 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:33.321 
17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:33.321 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:33.321 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:33.321 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:33.321 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:33.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:33.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:26:33.321 00:26:33.321 --- 10.0.0.2 ping statistics --- 00:26:33.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.321 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:33.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:33.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:26:33.321 00:26:33.321 --- 10.0.0.1 ping statistics --- 00:26:33.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.321 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2093070 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2093070 
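The block above builds the two-interface topology this test runs on: the E810 target port (cvl_0_0) is moved into its own network namespace, the initiator port (cvl_0_1) stays in the root namespace, and connectivity is verified in both directions. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Initiator side gets 10.0.0.1, target side 10.0.0.2, both on 10.0.0.0/24.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic on the default port, then ping both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1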
00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2093070 ']' 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.321 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:33.322 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.322 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:33.322 17:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:33.322 [2024-11-20 17:09:24.672959] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:26:33.322 [2024-11-20 17:09:24.673029] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.322 [2024-11-20 17:09:24.773913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:33.322 [2024-11-20 17:09:24.825388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.322 [2024-11-20 17:09:24.825438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.322 [2024-11-20 17:09:24.825448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:33.322 [2024-11-20 17:09:24.825455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:33.322 [2024-11-20 17:09:24.825461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
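nvmfappstart then launches the target inside that namespace and waits on its RPC socket before configuring anything. A sketch of what the trace shows, where waitforlisten is the harness helper that polls /var/tmp/spdk.sock until the app answers:

    # Run nvmf_tgt in the target namespace with this run's flags
    # (-i 0 shm id, -e 0xFFFF tracepoint group mask, -m 0x3 core mask).
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # harness helper; blocks until spdk.sock is up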
00:26:33.322 [2024-11-20 17:09:24.827128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.322 [2024-11-20 17:09:24.827134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.322 17:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:33.322 17:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:33.322 17:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:33.322 17:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:33.322 17:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:33.583 17:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.583 17:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2093070 00:26:33.583 17:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:33.583 [2024-11-20 17:09:25.688184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.583 17:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:33.844 Malloc0 00:26:33.844 17:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:34.105 17:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:34.366 17:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:34.366 [2024-11-20 17:09:26.508204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:34.628 17:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:34.628 [2024-11-20 17:09:26.704739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:34.628 17:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2093569 00:26:34.628 17:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:34.628 17:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:34.628 17:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2093569 
/var/tmp/bdevperf.sock 00:26:34.628 17:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2093569 ']' 00:26:34.628 17:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:34.628 17:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:34.628 17:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:34.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:34.628 17:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:34.628 17:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:35.571 17:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.571 17:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:35.571 17:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:35.832 17:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:36.092 Nvme0n1 00:26:36.092 17:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:36.353 Nvme0n1 00:26:36.613 17:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:36.613 17:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:38.529 17:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:38.529 17:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:38.790 17:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:38.790 17:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:40.184 17:09:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:40.184 17:09:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:40.184 17:09:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.184 17:09:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:40.184 17:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.184 17:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:40.184 17:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.184 17:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:40.185 17:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:40.185 17:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:40.185 17:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.185 17:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:40.448 17:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.448 17:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:40.448 17:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.448 17:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:40.709 17:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.709 17:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:40.709 17:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.709 17:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:40.709 17:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.709 17:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:40.709 17:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.709 17:09:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:40.970 17:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.970 17:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:40.970 17:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:41.231 17:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:41.492 17:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:42.431 17:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:42.431 17:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:42.431 17:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.431 17:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:42.691 17:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:42.691 17:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:42.691 17:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.691 17:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:42.691 17:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.691 17:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:42.691 17:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.691 17:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:42.950 17:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.950 17:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:42.950 17:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.950 17:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:43.210 17:09:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.210 17:09:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:43.210 17:09:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:43.210 17:09:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.210 17:09:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.210 17:09:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:43.210 17:09:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.210 17:09:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:43.470 17:09:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.470 17:09:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:43.470 17:09:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:43.730 17:09:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:43.730 17:09:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:45.111 17:09:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:45.111 17:09:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:45.111 17:09:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.112 17:09:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:45.112 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.112 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:45.112 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.112 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:45.112 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:45.112 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:45.112 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.112 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:45.372 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.372 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:45.372 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.372 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:45.633 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.633 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:45.633 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.633 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:45.633 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.633 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:45.633 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.633 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:45.894 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.894 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:45.894 17:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:26:46.155 17:09:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:46.155 17:09:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:47.541 17:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:47.541 17:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:47.541 17:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.541 17:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:47.541 17:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.541 17:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:47.541 17:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.541 17:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:47.541 17:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:47.541 17:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:47.541 17:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.541 17:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:47.802 17:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.802 17:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:47.802 17:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.802 17:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:48.063 17:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.063 17:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:48.063 17:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:48.063 17:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:48.323 17:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.323 17:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:48.323 17:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.323 17:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:48.323 17:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:48.323 17:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:48.323 17:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:48.585 17:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:48.846 17:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:49.788 17:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:49.788 17:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:49.788 17:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.788 17:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:50.049 17:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:50.049 17:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:50.049 17:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.049 17:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:50.049 17:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:50.049 17:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:50.049 17:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.049 17:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:50.311 17:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.312 17:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:50.312 17:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.312 17:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:50.573 17:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.573 17:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:50.573 17:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.573 17:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:50.573 17:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:50.573 17:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:50.573 17:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.573 17:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:50.834 17:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:50.834 17:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:50.834 17:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:51.095 17:09:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:51.095 17:09:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:52.482 17:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:52.482 17:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:52.482 17:09:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:52.482 17:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.482 17:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:52.482 17:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:52.482 17:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.482 17:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:52.482 17:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.482 17:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:52.482 17:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.482 17:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:52.744 17:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.744 17:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:52.744 17:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.744 17:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:53.006 17:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.006 17:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:53.006 17:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.006 17:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:53.006 17:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:53.006 17:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:53.006 17:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.006 
17:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:53.267 17:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.267 17:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:53.529 17:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:53.529 17:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:53.529 17:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:53.790 17:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:54.900 17:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:54.900 17:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:54.900 17:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.900 17:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:55.169 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.169 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:55.169 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.169 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:55.169 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.169 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:55.169 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.169 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:55.430 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.430 17:09:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:55.430 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.430 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:55.692 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.692 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:55.692 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.692 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:55.692 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.692 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:55.692 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.692 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:55.952 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.952 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:55.952 17:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:56.213 17:09:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:56.213 17:09:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:57.593 17:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:57.593 17:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:57.593 17:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.593 17:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:57.593 17:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:57.593 17:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:57.594 17:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.594 17:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:57.594 17:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.594 17:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:57.594 17:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:57.594 17:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.854 17:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.854 17:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:57.854 17:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.854 17:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:58.137 17:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.137 17:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:58.137 17:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.137 17:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:58.137 17:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.137 17:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:58.137 17:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.137 17:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:58.398 17:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.398 17:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:58.398 
17:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:58.658 17:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:58.918 17:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:59.857 17:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:59.857 17:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:59.857 17:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.857 17:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:00.117 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.117 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:00.117 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.117 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:00.117 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.117 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:00.117 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.117 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:00.378 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.378 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:00.378 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.378 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:00.638 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.638 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:00.638 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:00.638 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.638 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.638 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:00.638 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.638 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:00.898 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.898 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:00.898 17:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:01.159 17:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:01.159 17:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:02.542 17:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:02.542 17:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:02.542 17:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.542 17:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:02.542 17:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.542 17:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:02.542 17:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.542 17:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:02.542 17:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:27:02.542 17:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:02.542 17:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.542 17:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:02.802 17:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.802 17:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:02.802 17:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.802 17:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:03.063 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.063 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:03.063 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.063 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:03.323 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.323 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:03.323 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:03.323 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.323 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:03.323 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2093569 00:27:03.323 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2093569 ']' 00:27:03.323 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2093569 00:27:03.323 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:03.323 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:03.323 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2093569 00:27:03.603 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # 
process_name=reactor_2 00:27:03.604 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:03.604 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2093569' 00:27:03.604 killing process with pid 2093569 00:27:03.604 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2093569 00:27:03.604 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2093569 00:27:03.604 { 00:27:03.604 "results": [ 00:27:03.604 { 00:27:03.604 "job": "Nvme0n1", 00:27:03.604 "core_mask": "0x4", 00:27:03.604 "workload": "verify", 00:27:03.604 "status": "terminated", 00:27:03.604 "verify_range": { 00:27:03.604 "start": 0, 00:27:03.604 "length": 16384 00:27:03.604 }, 00:27:03.604 "queue_depth": 128, 00:27:03.604 "io_size": 4096, 00:27:03.604 "runtime": 26.8678, 00:27:03.604 "iops": 11860.852023611907, 00:27:03.604 "mibps": 46.331453217234014, 00:27:03.604 "io_failed": 0, 00:27:03.604 "io_timeout": 0, 00:27:03.604 "avg_latency_us": 10757.924140310139, 00:27:03.604 "min_latency_us": 802.1333333333333, 00:27:03.604 "max_latency_us": 3019898.88 00:27:03.604 } 00:27:03.604 ], 00:27:03.604 "core_count": 1 00:27:03.604 } 00:27:03.604 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2093569 00:27:03.604 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:03.604 [2024-11-20 17:09:26.800485] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:27:03.604 [2024-11-20 17:09:26.800568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093569 ] 00:27:03.604 [2024-11-20 17:09:26.895390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.604 [2024-11-20 17:09:26.946228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:03.604 Running I/O for 90 seconds... 
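Annotation: the run summary JSON printed above is internally consistent. bdevperf reports per-job IOPS and MiB/s for 4096-byte I/O over a 26.8678 s runtime, and the two figures agree, since MiB/s = IOPS * io_size / 2^20; the numbers below are copied from the summary:

    awk 'BEGIN { print 11860.852023611907 * 4096 / 1048576 }'
    # -> 46.3315 (awk's default %.6g), matching "mibps": 46.331453217234014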
00:27:03.604 10377.00 IOPS, 40.54 MiB/s [2024-11-20T16:09:55.780Z] 10805.50 IOPS, 42.21 MiB/s [2024-11-20T16:09:55.780Z] 11077.33 IOPS, 43.27 MiB/s [2024-11-20T16:09:55.780Z] 11377.25 IOPS, 44.44 MiB/s [2024-11-20T16:09:55.780Z] 11708.80 IOPS, 45.74 MiB/s [2024-11-20T16:09:55.780Z] 11878.00 IOPS, 46.40 MiB/s [2024-11-20T16:09:55.780Z] 12006.86 IOPS, 46.90 MiB/s [2024-11-20T16:09:55.780Z] 12118.88 IOPS, 47.34 MiB/s [2024-11-20T16:09:55.780Z] 12200.89 IOPS, 47.66 MiB/s [2024-11-20T16:09:55.780Z] 12275.00 IOPS, 47.95 MiB/s [2024-11-20T16:09:55.780Z] 12333.09 IOPS, 48.18 MiB/s [2024-11-20T16:09:55.780Z] [2024-11-20 17:09:40.602914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.604 [2024-11-20 17:09:40.602949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:03.604 [2024-11-20 17:09:40.602984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.604 [2024-11-20 17:09:40.602991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:03.604 [2024-11-20 17:09:40.603002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.604 [2024-11-20 17:09:40.603008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.604 [2024-11-20 17:09:40.603018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.604 [2024-11-20 17:09:40.603023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.604 [2024-11-20 17:09:40.603034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.604 [2024-11-20 17:09:40.603039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.604 [2024-11-20 17:09:40.603050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.604 [2024-11-20 17:09:40.603055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:03.604 [2024-11-20 17:09:40.603066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.604 [2024-11-20 17:09:40.603071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:03.604 [2024-11-20 17:09:40.603081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.604 [2024-11-20 17:09:40.603087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:03.604 [2024-11-20 17:09:40.603120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.604 [2024-11-20 17:09:40.603126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:03.604 [2024-11-20 17:09:40.603444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.604 [2024-11-20 17:09:40.603461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:03.604 [2024-11-20 17:09:40.603474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.604 [2024-11-20 17:09:40.603481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:03.604 [2024-11-20 17:09:40.603492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.604 [2024-11-20 17:09:40.603498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:03.604 [2024-11-20 17:09:40.603509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.604 [2024-11-20 17:09:40.603514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:03.604 [2024-11-20 17:09:40.603525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.604 [2024-11-20 17:09:40.603530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:03.604 [2024-11-20 17:09:40.603541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.604 [2024-11-20 17:09:40.603546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:03.604 [2024-11-20 17:09:40.603557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.604 [2024-11-20 17:09:40.603563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:03.604 [2024-11-20 17:09:40.603574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.604 [2024-11-20 17:09:40.603579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:03.604 [2024-11-20 17:09:40.603590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.604 [2024-11-20 17:09:40.603595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:27:03.604 [2024-11-20 17:09:40.603606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.603983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.603989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.604000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.604006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.604017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.604021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.604032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.604037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.604047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.604053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.604064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.604070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.604081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.604086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:03.605 [2024-11-20 17:09:40.604097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.605 [2024-11-20 17:09:40.604102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 
[2024-11-20 17:09:40.604251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4136 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 
nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:03.606 [2024-11-20 17:09:40.604732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.606 [2024-11-20 17:09:40.604737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.604751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.604756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.604770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.604775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.604788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.604793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.604806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.604812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.604826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.604831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.604844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.604849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.604862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.604867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.604880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.607 [2024-11-20 17:09:40.604886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.604899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.607 [2024-11-20 17:09:40.604904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.604918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.607 [2024-11-20 17:09:40.604922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.604936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.607 [2024-11-20 17:09:40.604942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.604955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.607 [2024-11-20 17:09:40.604960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.604974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.607 [2024-11-20 17:09:40.604979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.604992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.607 [2024-11-20 17:09:40.604998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.605011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.607 [2024-11-20 17:09:40.605016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.605081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.605089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.605104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.605109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:27:03.607 [2024-11-20 17:09:40.605124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.605130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.605145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.605151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.605170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.605175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.605190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.605196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.605211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.605216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.605231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.605237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.605252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.605257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.605272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.605277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.605292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.605297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.605312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.605317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.605332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.605337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.605352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.605358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.605373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.605378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.605393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.605398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:03.607 [2024-11-20 17:09:40.605413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.607 [2024-11-20 17:09:40.605419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:40.605434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.608 [2024-11-20 17:09:40.605439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:40.605453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.608 [2024-11-20 17:09:40.605458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:40.605473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.608 [2024-11-20 17:09:40.605479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:40.605494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.608 [2024-11-20 17:09:40.605499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:40.605513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.608 [2024-11-20 17:09:40.605518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:40.605534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.608 [2024-11-20 17:09:40.605540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:40.605555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.608 [2024-11-20 17:09:40.605560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:40.605574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.608 [2024-11-20 17:09:40.605580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:40.605596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.608 [2024-11-20 17:09:40.605601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:40.605616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.608 [2024-11-20 17:09:40.605621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:40.605636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.608 [2024-11-20 17:09:40.605642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:40.605656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.608 [2024-11-20 17:09:40.605661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:40.605676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.608 [2024-11-20 17:09:40.605681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:40.605696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.608 [2024-11-20 17:09:40.605702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:40.605717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.608 [2024-11-20 17:09:40.605722] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:03.608 12324.25 IOPS, 48.14 MiB/s [2024-11-20T16:09:55.784Z] 11376.23 IOPS, 44.44 MiB/s [2024-11-20T16:09:55.784Z] 10563.64 IOPS, 41.26 MiB/s [2024-11-20T16:09:55.784Z] 9913.47 IOPS, 38.72 MiB/s [2024-11-20T16:09:55.784Z] 10102.44 IOPS, 39.46 MiB/s [2024-11-20T16:09:55.784Z] 10272.82 IOPS, 40.13 MiB/s [2024-11-20T16:09:55.784Z] 10621.61 IOPS, 41.49 MiB/s [2024-11-20T16:09:55.784Z] 10941.26 IOPS, 42.74 MiB/s [2024-11-20T16:09:55.784Z] 11130.90 IOPS, 43.48 MiB/s [2024-11-20T16:09:55.784Z] 11212.52 IOPS, 43.80 MiB/s [2024-11-20T16:09:55.784Z] 11287.36 IOPS, 44.09 MiB/s [2024-11-20T16:09:55.784Z] 11485.13 IOPS, 44.86 MiB/s [2024-11-20T16:09:55.784Z] 11699.00 IOPS, 45.70 MiB/s [2024-11-20T16:09:55.784Z] [2024-11-20 17:09:53.286010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.608 [2024-11-20 17:09:53.286049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:53.286067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.608 [2024-11-20 17:09:53.286074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:53.286085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.608 [2024-11-20 17:09:53.286090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:53.286100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.608 [2024-11-20 17:09:53.286106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:53.286121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.608 [2024-11-20 17:09:53.286126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:53.286137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.608 [2024-11-20 17:09:53.286142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:53.286152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.608 [2024-11-20 17:09:53.286162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:53.286173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.608 
[2024-11-20 17:09:53.286178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:53.286189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.608 [2024-11-20 17:09:53.286194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:53.286204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.608 [2024-11-20 17:09:53.286210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:53.286221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.608 [2024-11-20 17:09:53.286226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:53.286236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.608 [2024-11-20 17:09:53.286241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:03.608 [2024-11-20 17:09:53.286252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.608 [2024-11-20 17:09:53.286257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:03.609 [2024-11-20 17:09:53.286267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.609 [2024-11-20 17:09:53.286273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:03.609 [2024-11-20 17:09:53.286283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.609 [2024-11-20 17:09:53.286288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:03.609 [2024-11-20 17:09:53.286298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.609 [2024-11-20 17:09:53.286303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:03.609 [2024-11-20 17:09:53.286314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.609 [2024-11-20 17:09:53.286321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:03.609 [2024-11-20 17:09:53.286332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95864 
00:27:03.609 [2024-11-20 17:09:53.286337] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [condensed: repeated command/completion NOTICE pairs] READ and WRITE commands on sqid:1 nsid:1 (lba:95552 through lba:96880, len:8; WRITEs as SGL DATA BLOCK OFFSET 0x0 len:0x1000, READs as SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 (sqhd advancing 0x0035 onward, wrapping past 0x007f); the same pattern repeats without variation through 00:27:03.616 [2024-11-20 17:09:53.305792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS
INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.305805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.616 [2024-11-20 17:09:53.305815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.305830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.616 [2024-11-20 17:09:53.305837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.305851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.616 [2024-11-20 17:09:53.305858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.305872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.616 [2024-11-20 17:09:53.305879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.305893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.616 [2024-11-20 17:09:53.305900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.305914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.616 [2024-11-20 17:09:53.305921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.305935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.616 [2024-11-20 17:09:53.305942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.305956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.616 [2024-11-20 17:09:53.305963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.305976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.616 [2024-11-20 17:09:53.305983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.305997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.616 [2024-11-20 17:09:53.306004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.306018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.616 [2024-11-20 17:09:53.306026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.306041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.616 [2024-11-20 17:09:53.306048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.306511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.616 [2024-11-20 17:09:53.306526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.306542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.616 [2024-11-20 17:09:53.306549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.306563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.616 [2024-11-20 17:09:53.306571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.308447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.616 [2024-11-20 17:09:53.308463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.308479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.616 [2024-11-20 17:09:53.308486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.308500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.616 [2024-11-20 17:09:53.308507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.308521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.616 [2024-11-20 17:09:53.308528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.308541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:03.616 [2024-11-20 17:09:53.308549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.308562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.616 [2024-11-20 17:09:53.308569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.308583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.616 [2024-11-20 17:09:53.308590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.308604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.616 [2024-11-20 17:09:53.308611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.308624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.616 [2024-11-20 17:09:53.308632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:03.616 [2024-11-20 17:09:53.308645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.617 [2024-11-20 17:09:53.308652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.308669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.617 [2024-11-20 17:09:53.308676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.308690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.617 [2024-11-20 17:09:53.308697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.308711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.617 [2024-11-20 17:09:53.308718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.308732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.617 [2024-11-20 17:09:53.308738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.308752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 
lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.617 [2024-11-20 17:09:53.308759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.308773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.617 [2024-11-20 17:09:53.308780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.308794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.617 [2024-11-20 17:09:53.308801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.308815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.617 [2024-11-20 17:09:53.308822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.308836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.617 [2024-11-20 17:09:53.308843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.308857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.617 [2024-11-20 17:09:53.308864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.308877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.617 [2024-11-20 17:09:53.308885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.308899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.617 [2024-11-20 17:09:53.308906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.308921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.617 [2024-11-20 17:09:53.308928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.308942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.617 [2024-11-20 17:09:53.308949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.308963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.617 [2024-11-20 17:09:53.308970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.308984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.617 [2024-11-20 17:09:53.308990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.309004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.617 [2024-11-20 17:09:53.309011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.309025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.617 [2024-11-20 17:09:53.309032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.309045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.617 [2024-11-20 17:09:53.309053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.309066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.617 [2024-11-20 17:09:53.309073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.309087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.617 [2024-11-20 17:09:53.309094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.309108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.617 [2024-11-20 17:09:53.309114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.309128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.617 [2024-11-20 17:09:53.309135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.309148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.617 [2024-11-20 17:09:53.309155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:27:03.617 [2024-11-20 17:09:53.309176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.617 [2024-11-20 17:09:53.309184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.309197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.617 [2024-11-20 17:09:53.309204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.309219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.617 [2024-11-20 17:09:53.309226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.309240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.617 [2024-11-20 17:09:53.309246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:03.617 [2024-11-20 17:09:53.309261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.617 [2024-11-20 17:09:53.309268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.310816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.618 [2024-11-20 17:09:53.310832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.310849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.310857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.310870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.310878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.310892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.310898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.310912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.310920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.310933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.310940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.310954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.310962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.310975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.310985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.310999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.311007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.311020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.618 [2024-11-20 17:09:53.311027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.311042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.311050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.311063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.311071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.311085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.311092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.311106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.311113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.311127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.311134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.311148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.311154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.311183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.311190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.311204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.618 [2024-11-20 17:09:53.311211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.311225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.618 [2024-11-20 17:09:53.311232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.311954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.311972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.311988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.618 [2024-11-20 17:09:53.311996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.312010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.312017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.312031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.618 [2024-11-20 17:09:53.312038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.312052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.312059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.312073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:03.618 [2024-11-20 17:09:53.312080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.312095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.312102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.312116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.618 [2024-11-20 17:09:53.312123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.312137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.618 [2024-11-20 17:09:53.312144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.312164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.618 [2024-11-20 17:09:53.312172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.312185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.312193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.312207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.312214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.312228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.618 [2024-11-20 17:09:53.312235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.312250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.618 [2024-11-20 17:09:53.312258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:03.618 [2024-11-20 17:09:53.312271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.619 [2024-11-20 17:09:53.312279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.312292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.619 [2024-11-20 17:09:53.312300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.312314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.619 [2024-11-20 17:09:53.312321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.312335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.619 [2024-11-20 17:09:53.312342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.312355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.619 [2024-11-20 17:09:53.312362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.312376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.619 [2024-11-20 17:09:53.312384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.312397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.619 [2024-11-20 17:09:53.312404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.312418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.619 [2024-11-20 17:09:53.312425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.312439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.619 [2024-11-20 17:09:53.312446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.312461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.619 [2024-11-20 17:09:53.312468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.312482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.619 [2024-11-20 17:09:53.312489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.312505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.619 [2024-11-20 17:09:53.312512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.312526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.619 [2024-11-20 17:09:53.312534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.313677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.619 [2024-11-20 17:09:53.313690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.313705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.619 [2024-11-20 17:09:53.313712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.313725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.619 [2024-11-20 17:09:53.313732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.313745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.619 [2024-11-20 17:09:53.313751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.313764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.619 [2024-11-20 17:09:53.313771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.313784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.619 [2024-11-20 17:09:53.313792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.313806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.619 [2024-11-20 17:09:53.313812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.313825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.619 [2024-11-20 17:09:53.313832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 
dnr:0 00:27:03.619 [2024-11-20 17:09:53.313845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.619 [2024-11-20 17:09:53.313851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.313864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.619 [2024-11-20 17:09:53.313871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.313884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.619 [2024-11-20 17:09:53.313893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.313906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.619 [2024-11-20 17:09:53.313913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.313925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.619 [2024-11-20 17:09:53.313932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.313945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.619 [2024-11-20 17:09:53.313951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.313964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.619 [2024-11-20 17:09:53.313970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.313983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.619 [2024-11-20 17:09:53.313990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.314003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.619 [2024-11-20 17:09:53.314009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:03.619 [2024-11-20 17:09:53.314023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.620 [2024-11-20 17:09:53.314029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:03.620 [2024-11-20 17:09:53.314042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.620 [2024-11-20 17:09:53.314049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:03.620 [2024-11-20 17:09:53.314062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.620 [2024-11-20 17:09:53.314069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:03.620 [2024-11-20 17:09:53.314082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.620 [2024-11-20 17:09:53.314089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:03.620 [2024-11-20 17:09:53.314102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.620 [2024-11-20 17:09:53.314109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:03.620 [2024-11-20 17:09:53.314121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.620 [2024-11-20 17:09:53.314129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:03.620 [2024-11-20 17:09:53.314142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.620 [2024-11-20 17:09:53.314149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:03.620 [2024-11-20 17:09:53.314167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.620 [2024-11-20 17:09:53.314173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.620 [2024-11-20 17:09:53.314186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.620 [2024-11-20 17:09:53.314193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.620 [2024-11-20 17:09:53.314206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.620 [2024-11-20 17:09:53.314212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.620 [2024-11-20 17:09:53.314225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.620 [2024-11-20 17:09:53.314233] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:27:03.620 [2024-11-20 17:09:53.314247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.620 [2024-11-20 17:09:53.314254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:27:03.620 [2024-11-20 17:09:53.314327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.620 [2024-11-20 17:09:53.314333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
[... further identical *NOTICE* command/completion pairs omitted: every outstanding READ and WRITE on qid:1 between 17:09:53.314 and 17:09:53.326 completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:27:03.626 [2024-11-20 17:09:53.326455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:03.626 [2024-11-20 17:09:53.326466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114
nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.626 [2024-11-20 17:09:53.326472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:03.626 [2024-11-20 17:09:53.326483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.626 [2024-11-20 17:09:53.326488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:03.626 [2024-11-20 17:09:53.326500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.626 [2024-11-20 17:09:53.326505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:03.626 [2024-11-20 17:09:53.326516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.626 [2024-11-20 17:09:53.326522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:03.626 [2024-11-20 17:09:53.326532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.626 [2024-11-20 17:09:53.326538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:03.626 [2024-11-20 17:09:53.326548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.626 [2024-11-20 17:09:53.326554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:03.626 [2024-11-20 17:09:53.326565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.626 [2024-11-20 17:09:53.326571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:03.626 [2024-11-20 17:09:53.326582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.626 [2024-11-20 17:09:53.326587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:03.626 [2024-11-20 17:09:53.326598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.626 [2024-11-20 17:09:53.326604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:03.626 [2024-11-20 17:09:53.326614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.626 [2024-11-20 17:09:53.326620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:03.626 [2024-11-20 17:09:53.326630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.626 [2024-11-20 17:09:53.326636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.326647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.627 [2024-11-20 17:09:53.326652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.326663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.627 [2024-11-20 17:09:53.326668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.326679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.627 [2024-11-20 17:09:53.326685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.327572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.627 [2024-11-20 17:09:53.327583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.327595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.627 [2024-11-20 17:09:53.327601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.327612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.627 [2024-11-20 17:09:53.327618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.327629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.627 [2024-11-20 17:09:53.327634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.327645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.627 [2024-11-20 17:09:53.327654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.327664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.627 [2024-11-20 17:09:53.327670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 
dnr:0 00:27:03.627 [2024-11-20 17:09:53.327681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.627 [2024-11-20 17:09:53.327686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.327697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.627 [2024-11-20 17:09:53.327703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.327713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.627 [2024-11-20 17:09:53.327719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.327730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.627 [2024-11-20 17:09:53.327735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.327932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.627 [2024-11-20 17:09:53.327941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.327953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.627 [2024-11-20 17:09:53.327958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.327969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.627 [2024-11-20 17:09:53.327975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.327986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.627 [2024-11-20 17:09:53.327991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.328002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.627 [2024-11-20 17:09:53.328007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.328018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.627 [2024-11-20 17:09:53.328023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.328034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.627 [2024-11-20 17:09:53.328039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.328052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.627 [2024-11-20 17:09:53.328058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.328068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.627 [2024-11-20 17:09:53.328074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.328085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.627 [2024-11-20 17:09:53.328090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.328101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.627 [2024-11-20 17:09:53.328107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.328117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.627 [2024-11-20 17:09:53.328122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.328133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.627 [2024-11-20 17:09:53.328139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.328149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.627 [2024-11-20 17:09:53.328155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.328170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.627 [2024-11-20 17:09:53.328176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.328186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.627 [2024-11-20 17:09:53.328192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:03.627 [2024-11-20 17:09:53.328204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.627 [2024-11-20 17:09:53.328209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.328219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.628 [2024-11-20 17:09:53.328225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.328235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.628 [2024-11-20 17:09:53.328241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.328253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.628 [2024-11-20 17:09:53.328259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.328270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.628 [2024-11-20 17:09:53.328276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.328287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.628 [2024-11-20 17:09:53.328292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.328303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.628 [2024-11-20 17:09:53.328308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.628 [2024-11-20 17:09:53.329572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.628 [2024-11-20 17:09:53.329590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:03.628 [2024-11-20 17:09:53.329606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.628 [2024-11-20 17:09:53.329622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.628 [2024-11-20 17:09:53.329637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.628 [2024-11-20 17:09:53.329652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.628 [2024-11-20 17:09:53.329668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.628 [2024-11-20 17:09:53.329683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.628 [2024-11-20 17:09:53.329702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.628 [2024-11-20 17:09:53.329717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.628 [2024-11-20 17:09:53.329734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.628 [2024-11-20 17:09:53.329749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:54 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.628 [2024-11-20 17:09:53.329765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.628 [2024-11-20 17:09:53.329782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.628 [2024-11-20 17:09:53.329798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.628 [2024-11-20 17:09:53.329814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.628 [2024-11-20 17:09:53.329830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.628 [2024-11-20 17:09:53.329846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.628 [2024-11-20 17:09:53.329862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.628 [2024-11-20 17:09:53.329878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.628 [2024-11-20 17:09:53.329895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.628 [2024-11-20 17:09:53.329911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.628 [2024-11-20 17:09:53.329927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.628 [2024-11-20 17:09:53.329943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:03.628 [2024-11-20 17:09:53.329953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.628 [2024-11-20 17:09:53.329958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.329969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.629 [2024-11-20 17:09:53.329974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.329985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.629 [2024-11-20 17:09:53.329990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.330001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.629 [2024-11-20 17:09:53.330006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.330016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.629 [2024-11-20 17:09:53.330021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.330032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.629 [2024-11-20 17:09:53.330037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.330048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.629 [2024-11-20 17:09:53.330054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.330064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.629 [2024-11-20 17:09:53.330070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 
dnr:0 00:27:03.629 [2024-11-20 17:09:53.330080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.629 [2024-11-20 17:09:53.330086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.330097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.629 [2024-11-20 17:09:53.330103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.330114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.629 [2024-11-20 17:09:53.330120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.330130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.629 [2024-11-20 17:09:53.330136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.629 [2024-11-20 17:09:53.331359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.629 [2024-11-20 17:09:53.331376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.629 [2024-11-20 17:09:53.331392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.629 [2024-11-20 17:09:53.331407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.629 [2024-11-20 17:09:53.331422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.629 [2024-11-20 17:09:53.331437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.629 [2024-11-20 17:09:53.331453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.629 [2024-11-20 17:09:53.331468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.629 [2024-11-20 17:09:53.331483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.629 [2024-11-20 17:09:53.331502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.629 [2024-11-20 17:09:53.331519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.629 [2024-11-20 17:09:53.331534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.629 [2024-11-20 17:09:53.331549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.629 [2024-11-20 17:09:53.331564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.629 [2024-11-20 17:09:53.331580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.629 [2024-11-20 17:09:53.331595] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.629 [2024-11-20 17:09:53.331610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.629 [2024-11-20 17:09:53.331625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.629 [2024-11-20 17:09:53.331641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.629 [2024-11-20 17:09:53.331656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.629 [2024-11-20 17:09:53.331672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:03.629 [2024-11-20 17:09:53.331682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.629 [2024-11-20 17:09:53.331689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.331699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.331705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.331715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.331721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.332695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.630 [2024-11-20 17:09:53.332707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.332718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:03.630 [2024-11-20 17:09:53.332724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.332735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.332741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.332751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.332757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.332767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.332773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.332784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.630 [2024-11-20 17:09:53.332790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.332800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.332806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.332816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.332822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.332833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.332838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.332849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.332866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.332877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.332883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.332894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:29 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.332899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.332910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.332916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.332927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.332933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.332943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.630 [2024-11-20 17:09:53.332949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.332959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.630 [2024-11-20 17:09:53.332965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.332976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.630 [2024-11-20 17:09:53.332981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.332992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.332998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.333008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.333014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.333024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.333030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.333040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.333046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.333056] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.630 [2024-11-20 17:09:53.333062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.333074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.630 [2024-11-20 17:09:53.333080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.333090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.630 [2024-11-20 17:09:53.333095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.333106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.333112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.333122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.333128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.333138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.630 [2024-11-20 17:09:53.333144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.333155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.630 [2024-11-20 17:09:53.333164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.333175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.630 [2024-11-20 17:09:53.333181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:03.630 [2024-11-20 17:09:53.333191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.630 [2024-11-20 17:09:53.333197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.333207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.631 [2024-11-20 17:09:53.333213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 
m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.333775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.631 [2024-11-20 17:09:53.333786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.333798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.631 [2024-11-20 17:09:53.333804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.333815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.631 [2024-11-20 17:09:53.333820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.333834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.631 [2024-11-20 17:09:53.333839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.333850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.631 [2024-11-20 17:09:53.333856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.333866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.631 [2024-11-20 17:09:53.333872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.333883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.631 [2024-11-20 17:09:53.333888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.333899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.631 [2024-11-20 17:09:53.333905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.334381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.631 [2024-11-20 17:09:53.334391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.334403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.631 [2024-11-20 17:09:53.334408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.334419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.631 [2024-11-20 17:09:53.334425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.334435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.631 [2024-11-20 17:09:53.334441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.334452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.631 [2024-11-20 17:09:53.334458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.334468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.631 [2024-11-20 17:09:53.334475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.334485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.631 [2024-11-20 17:09:53.334490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.334504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.631 [2024-11-20 17:09:53.334510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.335672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.631 [2024-11-20 17:09:53.335685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.335697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.631 [2024-11-20 17:09:53.335702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.335713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.631 [2024-11-20 17:09:53.335718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.335728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.631 [2024-11-20 17:09:53.335734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.335744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.631 [2024-11-20 17:09:53.335749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.335759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.631 [2024-11-20 17:09:53.335764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.335774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.631 [2024-11-20 17:09:53.335780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.335792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.631 [2024-11-20 17:09:53.335797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.335807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.631 [2024-11-20 17:09:53.335812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:03.631 [2024-11-20 17:09:53.335822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.631 [2024-11-20 17:09:53.335827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.335838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.632 [2024-11-20 17:09:53.335843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.335854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.632 [2024-11-20 17:09:53.335862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.335873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.632 [2024-11-20 17:09:53.335878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.335889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:03.632 [2024-11-20 17:09:53.335894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.335905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.632 [2024-11-20 17:09:53.335911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.335921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.632 [2024-11-20 17:09:53.335926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.335937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.632 [2024-11-20 17:09:53.335943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.335953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.632 [2024-11-20 17:09:53.335958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.335969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.632 [2024-11-20 17:09:53.335975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.335985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.632 [2024-11-20 17:09:53.335991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.632 [2024-11-20 17:09:53.336007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.632 [2024-11-20 17:09:53.336023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.632 [2024-11-20 17:09:53.336039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.632 [2024-11-20 17:09:53.336057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.632 [2024-11-20 17:09:53.336073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.632 [2024-11-20 17:09:53.336089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.632 [2024-11-20 17:09:53.336105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.632 [2024-11-20 17:09:53.336121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.632 [2024-11-20 17:09:53.336137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.632 [2024-11-20 17:09:53.336153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.632 [2024-11-20 17:09:53.336173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.632 [2024-11-20 17:09:53.336189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.632 [2024-11-20 17:09:53.336205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.632 [2024-11-20 17:09:53.336222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.632 [2024-11-20 17:09:53.336239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.632 [2024-11-20 17:09:53.336255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.632 [2024-11-20 17:09:53.336272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.632 [2024-11-20 17:09:53.336288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.632 [2024-11-20 17:09:53.336304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.632 [2024-11-20 17:09:53.336321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.632 [2024-11-20 17:09:53.336337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.632 [2024-11-20 17:09:53.336353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:03.632 [2024-11-20 17:09:53.336364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.632 [2024-11-20 17:09:53.336370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
00:27:03.633 [2024-11-20 17:09:53.336380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.633 [2024-11-20 17:09:53.336386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.336897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.633 [2024-11-20 17:09:53.336906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.336918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.633 [2024-11-20 17:09:53.336924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.336935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.633 [2024-11-20 17:09:53.336940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.336951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.633 [2024-11-20 17:09:53.336957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.336970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.633 [2024-11-20 17:09:53.336976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.336986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.633 [2024-11-20 17:09:53.336992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.337002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.633 [2024-11-20 17:09:53.337008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.337019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.633 [2024-11-20 17:09:53.337025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.337036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.633 [2024-11-20 17:09:53.337041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.337051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.633 [2024-11-20 17:09:53.337057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.337067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.633 [2024-11-20 17:09:53.337073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.337083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.633 [2024-11-20 17:09:53.337089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.337099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.633 [2024-11-20 17:09:53.337106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.337116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.633 [2024-11-20 17:09:53.337121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.337132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.633 [2024-11-20 17:09:53.337137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.337148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.633 [2024-11-20 17:09:53.337153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.337168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.633 [2024-11-20 17:09:53.337176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.338696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.633 [2024-11-20 17:09:53.338710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.338722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.633 [2024-11-20 17:09:53.338728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.338738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.633 [2024-11-20 17:09:53.338743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.338753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.633 [2024-11-20 17:09:53.338758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.338768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.633 [2024-11-20 17:09:53.338773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.338784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.633 [2024-11-20 17:09:53.338789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.338799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.633 [2024-11-20 17:09:53.338804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.338814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.633 [2024-11-20 17:09:53.338820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.338830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.633 [2024-11-20 17:09:53.338835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.338846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.633 [2024-11-20 17:09:53.338851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.338862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.633 [2024-11-20 17:09:53.338867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.338877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:03.633 [2024-11-20 17:09:53.338885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.338895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.633 [2024-11-20 17:09:53.338900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:03.633 [2024-11-20 17:09:53.338910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.633 [2024-11-20 17:09:53.338915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.338925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.634 [2024-11-20 17:09:53.338930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.338940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.634 [2024-11-20 17:09:53.338946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.338956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.634 [2024-11-20 17:09:53.338961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.338972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.634 [2024-11-20 17:09:53.338977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.338987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.634 [2024-11-20 17:09:53.338993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.634 [2024-11-20 17:09:53.339008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.634 [2024-11-20 17:09:53.339023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 
nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.634 [2024-11-20 17:09:53.339040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.634 [2024-11-20 17:09:53.339055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.634 [2024-11-20 17:09:53.339070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.634 [2024-11-20 17:09:53.339087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.634 [2024-11-20 17:09:53.339102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.634 [2024-11-20 17:09:53.339118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.634 [2024-11-20 17:09:53.339133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.634 [2024-11-20 17:09:53.339148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.634 [2024-11-20 17:09:53.339169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.634 [2024-11-20 17:09:53.339184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.634 [2024-11-20 17:09:53.339200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.634 [2024-11-20 17:09:53.339216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.634 [2024-11-20 17:09:53.339231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.634 [2024-11-20 17:09:53.339247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.634 [2024-11-20 17:09:53.339265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.634 [2024-11-20 17:09:53.339282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.634 [2024-11-20 17:09:53.339298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.634 [2024-11-20 17:09:53.339772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.634 [2024-11-20 17:09:53.339790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.634 [2024-11-20 17:09:53.339806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:27:03.634 [2024-11-20 17:09:53.339817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.634 [2024-11-20 17:09:53.339822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.634 [2024-11-20 17:09:53.339838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.634 [2024-11-20 17:09:53.339854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:03.634 [2024-11-20 17:09:53.339864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.634 [2024-11-20 17:09:53.339869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.339880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.635 [2024-11-20 17:09:53.339885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.339896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.635 [2024-11-20 17:09:53.339901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.339912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.635 [2024-11-20 17:09:53.339917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.339928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.635 [2024-11-20 17:09:53.339936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.339947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.635 [2024-11-20 17:09:53.339953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.339963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.635 [2024-11-20 17:09:53.339968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.339978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.635 [2024-11-20 17:09:53.339984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.339995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.635 [2024-11-20 17:09:53.340000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.340011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.635 [2024-11-20 17:09:53.340016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.340026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.635 [2024-11-20 17:09:53.340032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.340042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.635 [2024-11-20 17:09:53.340048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.340059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.635 [2024-11-20 17:09:53.340064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.340074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.635 [2024-11-20 17:09:53.340079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.340090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.635 [2024-11-20 17:09:53.340095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.340106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.635 [2024-11-20 17:09:53.340111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.340122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.635 [2024-11-20 17:09:53.340129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.340139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.635 [2024-11-20 17:09:53.340145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.340922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.635 [2024-11-20 17:09:53.340933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.340944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.635 [2024-11-20 17:09:53.340950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.340961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.635 [2024-11-20 17:09:53.340966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.340977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.635 [2024-11-20 17:09:53.340983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.340993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.635 [2024-11-20 17:09:53.340999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.341010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.635 [2024-11-20 17:09:53.341015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.341026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.635 [2024-11-20 17:09:53.341032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.341042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.635 [2024-11-20 17:09:53.341047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.341058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:03.635 [2024-11-20 17:09:53.341064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.341074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.635 [2024-11-20 17:09:53.341080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.341090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.635 [2024-11-20 17:09:53.341096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.341109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.635 [2024-11-20 17:09:53.341114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.341125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.635 [2024-11-20 17:09:53.341130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.341141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.635 [2024-11-20 17:09:53.341146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:03.635 [2024-11-20 17:09:53.341161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.635 [2024-11-20 17:09:53.341167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:03.636 [2024-11-20 17:09:53.341177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.636 [2024-11-20 17:09:53.341183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:03.636 [2024-11-20 17:09:53.341194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.636 [2024-11-20 17:09:53.341199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:03.636 [2024-11-20 17:09:53.341210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.636 [2024-11-20 17:09:53.341215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:03.636 [2024-11-20 17:09:53.341226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 
lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.636 [2024-11-20 17:09:53.341231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:03.636 [2024-11-20 17:09:53.341242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.636 [2024-11-20 17:09:53.341247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:03.636 [2024-11-20 17:09:53.341258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.636 [2024-11-20 17:09:53.341264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:03.636 [2024-11-20 17:09:53.341274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.636 [2024-11-20 17:09:53.341280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:03.636 [2024-11-20 17:09:53.341290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.636 [2024-11-20 17:09:53.341296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:03.636 [2024-11-20 17:09:53.341310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.636 [2024-11-20 17:09:53.341315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:03.636 [2024-11-20 17:09:53.341326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.636 [2024-11-20 17:09:53.341332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:03.636 [2024-11-20 17:09:53.341342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.636 [2024-11-20 17:09:53.341347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:03.636 [2024-11-20 17:09:53.341358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.636 [2024-11-20 17:09:53.341364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:03.636 [2024-11-20 17:09:53.341374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.636 [2024-11-20 17:09:53.341380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:03.636 [2024-11-20 17:09:53.341390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.636 [2024-11-20 17:09:53.341395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:27:03.636 [2024-11-20 17:09:53.341406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.636 [2024-11-20 17:09:53.341411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0
[many near-identical *NOTICE* pairs elided: between 17:09:53.341 and 17:09:53.351 every remaining outstanding READ/WRITE command on qid:1 (lba range ~98200-100328) is printed by nvme_io_qpair_print_command and completed by spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0]
00:27:03.642 [2024-11-20 17:09:53.351901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.642 [2024-11-20 17:09:53.351911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC
ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:03.642 [2024-11-20 17:09:53.351923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.642 [2024-11-20 17:09:53.351928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:03.642 [2024-11-20 17:09:53.351939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.642 [2024-11-20 17:09:53.351945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:03.642 [2024-11-20 17:09:53.351956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.642 [2024-11-20 17:09:53.351961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:03.642 [2024-11-20 17:09:53.351972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.642 [2024-11-20 17:09:53.351977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:03.642 [2024-11-20 17:09:53.351988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.642 [2024-11-20 17:09:53.351994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.642 [2024-11-20 17:09:53.352004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.642 [2024-11-20 17:09:53.352011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:03.642 [2024-11-20 17:09:53.352023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.642 [2024-11-20 17:09:53.352029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:03.642 [2024-11-20 17:09:53.352040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.642 [2024-11-20 17:09:53.352045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:03.642 [2024-11-20 17:09:53.352056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.643 [2024-11-20 17:09:53.352062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.352072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.643 [2024-11-20 17:09:53.352078] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.352088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.643 [2024-11-20 17:09:53.352094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.352104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.643 [2024-11-20 17:09:53.352110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.352121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.643 [2024-11-20 17:09:53.352127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.352138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.643 [2024-11-20 17:09:53.352143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.352153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.643 [2024-11-20 17:09:53.352163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.352174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.643 [2024-11-20 17:09:53.352180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.352190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.643 [2024-11-20 17:09:53.352196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.352206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.643 [2024-11-20 17:09:53.352212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.643 [2024-11-20 17:09:53.353135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:03.643 [2024-11-20 17:09:53.353152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.643 [2024-11-20 17:09:53.353172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.643 [2024-11-20 17:09:53.353187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.643 [2024-11-20 17:09:53.353203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.643 [2024-11-20 17:09:53.353219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.643 [2024-11-20 17:09:53.353234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.643 [2024-11-20 17:09:53.353250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.643 [2024-11-20 17:09:53.353265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.643 [2024-11-20 17:09:53.353281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.643 [2024-11-20 17:09:53.353297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:49 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.643 [2024-11-20 17:09:53.353313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.643 [2024-11-20 17:09:53.353330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.643 [2024-11-20 17:09:53.353346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.643 [2024-11-20 17:09:53.353361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.643 [2024-11-20 17:09:53.353376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.643 [2024-11-20 17:09:53.353392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.643 [2024-11-20 17:09:53.353408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.643 [2024-11-20 17:09:53.353423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.643 [2024-11-20 17:09:53.353439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.643 [2024-11-20 17:09:53.353455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 
17:09:53.353465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.643 [2024-11-20 17:09:53.353471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:03.643 [2024-11-20 17:09:53.353481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.644 [2024-11-20 17:09:53.353486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.353496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.644 [2024-11-20 17:09:53.353501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.353511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.644 [2024-11-20 17:09:53.353517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.353527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.644 [2024-11-20 17:09:53.353534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.353544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.644 [2024-11-20 17:09:53.353549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.353559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.644 [2024-11-20 17:09:53.353564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.353574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.644 [2024-11-20 17:09:53.353579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.353590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.644 [2024-11-20 17:09:53.353596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.353606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.644 [2024-11-20 17:09:53.353611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 
cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.353621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.644 [2024-11-20 17:09:53.353626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.353636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.644 [2024-11-20 17:09:53.353642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.353654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.644 [2024-11-20 17:09:53.353659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.644 [2024-11-20 17:09:53.355018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.644 [2024-11-20 17:09:53.355037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.644 [2024-11-20 17:09:53.355053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.644 [2024-11-20 17:09:53.355072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.644 [2024-11-20 17:09:53.355089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.644 [2024-11-20 17:09:53.355105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.644 [2024-11-20 17:09:53.355122] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.644 [2024-11-20 17:09:53.355138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.644 [2024-11-20 17:09:53.355154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.644 [2024-11-20 17:09:53.355175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.644 [2024-11-20 17:09:53.355191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.644 [2024-11-20 17:09:53.355207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.644 [2024-11-20 17:09:53.355224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.644 [2024-11-20 17:09:53.355240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.644 [2024-11-20 17:09:53.355257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.644 [2024-11-20 17:09:53.355274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.644 [2024-11-20 
17:09:53.355290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.644 [2024-11-20 17:09:53.355306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.644 [2024-11-20 17:09:53.355323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.644 [2024-11-20 17:09:53.355340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.644 [2024-11-20 17:09:53.355356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.644 [2024-11-20 17:09:53.355372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.644 [2024-11-20 17:09:53.355388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.644 [2024-11-20 17:09:53.355403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:03.644 [2024-11-20 17:09:53.355414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.645 [2024-11-20 17:09:53.355420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.355430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.645 [2024-11-20 17:09:53.355436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.355447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100136 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.645 [2024-11-20 17:09:53.355453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.355464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.645 [2024-11-20 17:09:53.355471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.645 [2024-11-20 17:09:53.356430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.645 [2024-11-20 17:09:53.356447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.645 [2024-11-20 17:09:53.356463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.645 [2024-11-20 17:09:53.356478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.645 [2024-11-20 17:09:53.356494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.645 [2024-11-20 17:09:53.356509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.645 [2024-11-20 17:09:53.356524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.645 [2024-11-20 17:09:53.356540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.645 [2024-11-20 17:09:53.356555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.645 [2024-11-20 17:09:53.356571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.645 [2024-11-20 17:09:53.356587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.645 [2024-11-20 17:09:53.356604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.645 [2024-11-20 17:09:53.356620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.645 [2024-11-20 17:09:53.356635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.645 [2024-11-20 17:09:53.356650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.645 [2024-11-20 17:09:53.356665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.645 [2024-11-20 17:09:53.356680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.645 [2024-11-20 17:09:53.356695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f 
p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.645 [2024-11-20 17:09:53.356711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.645 [2024-11-20 17:09:53.356726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.645 [2024-11-20 17:09:53.356742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.645 [2024-11-20 17:09:53.356757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.645 [2024-11-20 17:09:53.356772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.645 [2024-11-20 17:09:53.356789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.645 [2024-11-20 17:09:53.356804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.645 [2024-11-20 17:09:53.356819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.645 [2024-11-20 17:09:53.356834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.645 [2024-11-20 17:09:53.356850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.645 [2024-11-20 17:09:53.356866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.645 [2024-11-20 17:09:53.356881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:03.645 [2024-11-20 17:09:53.356891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.646 [2024-11-20 17:09:53.356896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:03.646 [2024-11-20 17:09:53.356907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.646 [2024-11-20 17:09:53.356912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:03.646 [2024-11-20 17:09:53.356923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.646 [2024-11-20 17:09:53.356928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:03.646 [2024-11-20 17:09:53.356938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.646 [2024-11-20 17:09:53.356943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:03.646 [2024-11-20 17:09:53.356953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.646 [2024-11-20 17:09:53.356960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:03.646 [2024-11-20 17:09:53.356971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.646 [2024-11-20 17:09:53.356977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:03.646 [2024-11-20 17:09:53.356988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.646 [2024-11-20 17:09:53.356994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:03.646 [2024-11-20 17:09:53.357004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.646 [2024-11-20 17:09:53.357010] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:03.646 [2024-11-20 17:09:53.357553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.646 [2024-11-20 17:09:53.357564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:03.646 [2024-11-20 17:09:53.358966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.646 [2024-11-20 17:09:53.358980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:03.646 [2024-11-20 17:09:53.358991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.646 [2024-11-20 17:09:53.358997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:03.646 [2024-11-20 17:09:53.359008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.646 [2024-11-20 17:09:53.359013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:03.646 [2024-11-20 17:09:53.359023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.646 [2024-11-20 17:09:53.359028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:03.646 [2024-11-20 17:09:53.359038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.646 [2024-11-20 17:09:53.359044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:03.646 [2024-11-20 17:09:53.359054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.646 [2024-11-20 17:09:53.359059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:03.646 [2024-11-20 17:09:53.359070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.646 [2024-11-20 17:09:53.359075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:03.646 [2024-11-20 17:09:53.359085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.646 [2024-11-20 17:09:53.359091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:03.646 [2024-11-20 17:09:53.359101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100840 len:8 SGL DATA BLOCK 
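The flood of ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions above is consistent with the multipath status test driving one path into the ANA inaccessible state; the summary table below reports Fail/s 0.00, so these I/Os were retried rather than surfaced as errors. When triaging a dump like this, counting the repeated statuses is usually enough. A minimal sketch, assuming the console output has been saved locally (build.log is a hypothetical filename):

    # Count every I/O that completed with the ANA-inaccessible status.
    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log | wc -l

    # Break the same dump down by opcode (READ vs WRITE).
    grep -o '\*NOTICE\*: \(READ\|WRITE\)' build.log | sort | uniq -c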
00:27:03.646 11817.44 IOPS, 46.16 MiB/s [2024-11-20T16:09:55.822Z]
00:27:03.646 11852.58 IOPS, 46.30 MiB/s [2024-11-20T16:09:55.822Z]
00:27:03.646 Received shutdown signal, test time was about 26.868408 seconds
00:27:03.646
00:27:03.646 Latency(us)
00:27:03.646 [2024-11-20T16:09:55.822Z] Device Information   : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:27:03.646 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:03.646 Verification LBA range: start 0x0 length 0x4000
00:27:03.646 Nvme0n1              :      26.87   11860.85      46.33      0.00     0.00   10757.92     802.13  3019898.88
00:27:03.646 [2024-11-20T16:09:55.822Z] ===================================================================================================================
00:27:03.646 [2024-11-20T16:09:55.822Z] Total                :             11860.85      46.33      0.00     0.00   10757.92     802.13  3019898.88
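A quick sanity check on the summary table: the MiB/s column is simply IOPS multiplied by the 4096-byte I/O size. Reproducing the Total row's 46.33 MiB/s:

    awk 'BEGIN { printf "%.2f MiB/s\n", 11860.85 * 4096 / 1048576 }'   # prints 46.33 MiB/s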
00:27:03.646 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:03.908 rmmod nvme_tcp
00:27:03.908 rmmod nvme_fabrics
00:27:03.908 rmmod nvme_keyring
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2093070 ']'
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2093070
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2093070 ']'
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2093070
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2093070
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2093070'
00:27:03.908 killing process with pid 2093070
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2093070
00:27:03.908 17:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2093070
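The @954-@978 steps above trace SPDK's killprocess helper tearing down the nvmf target (a reactor_0 process). A minimal sketch of the logic the trace implies (a reconstruction, not the verbatim autotest_common.sh source):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                            # @954: require a pid
        kill -0 "$pid" 2> /dev/null || return 0              # @958: nothing to do if already gone
        if [ "$(uname)" = Linux ]; then                      # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: e.g. reactor_0
        fi
        [ "$process_name" = sudo ] && return 1               # @964: never signal a bare sudo
        echo "killing process with pid $pid"                 # @972
        kill "$pid"                                          # @973: default SIGTERM
        wait "$pid" || true                                  # @978: reap if it is our child
    }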
00:27:04.170 17:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:04.170 17:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:04.170 17:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:04.170 17:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:27:04.170 17:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:27:04.170 17:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:04.170 17:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:27:04.170 17:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:04.170 17:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:04.170 17:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:04.170 17:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:04.170 17:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:06.084 17:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
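nvmf_tcp_fini above unwinds the networking state in three steps: strip the SPDK_NVMF-tagged iptables rules, remove the SPDK network namespace if one matches, and flush the test NIC. A sketch of those steps as the trace shows them (helper bodies are reconstructions; only the individual commands appear in the log):

    iptr() {
        # @791: reload the ruleset minus every SPDK_NVMF-tagged rule
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
    remove_spdk_ns() {
        # nvmf_tgt_ns_spdk is the name compared against at @298; this run
        # used cvl_0_0_ns_spdk, so the comparison did not match
        ip netns del nvmf_tgt_ns_spdk 2> /dev/null || true
    }
    iptr                        # @297
    remove_spdk_ns              # @302, wrapped by xtrace_disable_per_cmd (@656)
    ip -4 addr flush cvl_0_1    # @303: drop the test NIC's IPv4 addresses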
00:27:06.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:06.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.346 --rc genhtml_branch_coverage=1 00:27:06.346 --rc genhtml_function_coverage=1 00:27:06.346 --rc genhtml_legend=1 00:27:06.346 --rc geninfo_all_blocks=1 00:27:06.346 --rc geninfo_unexecuted_blocks=1 00:27:06.346 00:27:06.346 ' 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:06.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.346 --rc genhtml_branch_coverage=1 00:27:06.346 --rc genhtml_function_coverage=1 00:27:06.346 --rc genhtml_legend=1 00:27:06.346 --rc geninfo_all_blocks=1 00:27:06.346 --rc geninfo_unexecuted_blocks=1 00:27:06.346 00:27:06.346 ' 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:06.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.346 --rc genhtml_branch_coverage=1 00:27:06.346 --rc genhtml_function_coverage=1 00:27:06.346 --rc genhtml_legend=1 00:27:06.346 --rc geninfo_all_blocks=1 00:27:06.346 --rc geninfo_unexecuted_blocks=1 00:27:06.346 00:27:06.346 ' 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:06.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.346 --rc genhtml_branch_coverage=1 00:27:06.346 --rc genhtml_function_coverage=1 00:27:06.346 --rc genhtml_legend=1 00:27:06.346 --rc geninfo_all_blocks=1 00:27:06.346 --rc geninfo_unexecuted_blocks=1 00:27:06.346 00:27:06.346 ' 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.346 
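The xtrace above is the lcov version gate from scripts/common.sh: "lt 1.15 2" delegates to cmp_versions, which splits both version strings on "." and "-" into arrays and compares them field by field as integers, treating missing fields as zero. The following is an approximate standalone reconstruction inferred from the trace, not the literal source; the canonical helper also validates each field through decimal before comparing, which this sketch omits:

    cmp_versions() {
        # split "1.15" -> (1 15) and "2" -> (2) on '.' and '-'
        local -a ver1 ver2
        local op=$2 v
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$3"
        # walk the longer of the two arrays; absent fields count as 0
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == *=* ]]   # all fields equal: only ==, <= and >= succeed
    }
    lt() { cmp_versions "$1" '<' "$2"; }

Here lt 1.15 2 is true (decided by 1 < 2 in the first field), so the run picks the lcov 1.x option spellings captured in the LCOV_OPTS export above.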
17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:06.346 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:06.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.347 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.608 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.608 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:06.608 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:06.608 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:27:06.608 17:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.756 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:14.756 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:27:14.756 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:14.756 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:14.756 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:14.756 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:14.756 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:14.756 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:27:14.756 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:27:14.757 17:10:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:14.757 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:14.757 17:10:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:14.757 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:14.757 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:14.757 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:14.757 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:14.757 
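Condensed, the nvmf_tcp_init sequence the trace just finished builds a two-endpoint TCP rig out of the pair of E810 ports: the target-side port (cvl_0_0) is moved into a private network namespace so target and initiator get separate network stacks on one host, both ends are addressed from 10.0.0.0/24, and the NVMe/TCP listener port is opened with an iptables rule tagged SPDK_NVMF so the cleanup path at the top of this section can strip it back out via iptables-save | grep -v SPDK_NVMF | iptables-restore. The core commands, as they appear in the trace:

    ip netns add cvl_0_0_ns_spdk                       # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The two pings that follow confirm the path in both directions before any NVMe-oF traffic is attempted.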
17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:14.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:14.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:27:14.758 00:27:14.758 --- 10.0.0.2 ping statistics --- 00:27:14.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.758 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:27:14.758 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:14.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:14.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:27:14.758 00:27:14.758 --- 10.0.0.1 ping statistics --- 00:27:14.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.758 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:27:14.758 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.758 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:27:14.758 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:14.758 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:14.758 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:14.758 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:14.758 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:14.758 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:14.758 17:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:14.758 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:14.758 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:14.758 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:14.758 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.758 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2103541 00:27:14.758 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2103541 00:27:14.758 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:14.758 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2103541 ']' 00:27:14.758 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.758 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:14.758 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:14.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.758 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:14.758 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.758 [2024-11-20 17:10:06.103827] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:27:14.758 [2024-11-20 17:10:06.103892] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:14.758 [2024-11-20 17:10:06.205312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.758 [2024-11-20 17:10:06.254634] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:14.758 [2024-11-20 17:10:06.254685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:14.758 [2024-11-20 17:10:06.254694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:14.758 [2024-11-20 17:10:06.254701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:14.758 [2024-11-20 17:10:06.254707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:14.758 [2024-11-20 17:10:06.255484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.019 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:15.019 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:15.019 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:15.019 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:15.019 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:15.019 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:15.019 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:15.019 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.019 17:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:15.019 [2024-11-20 17:10:06.991320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:15.019 [2024-11-20 17:10:06.999634] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:15.019 null0 00:27:15.019 [2024-11-20 17:10:07.031539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:15.020 17:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.020 17:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2103622 00:27:15.020 17:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2103622 /tmp/host.sock 00:27:15.020 17:10:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:15.020 17:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2103622 ']' 00:27:15.020 17:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:15.020 17:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:15.020 17:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:15.020 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:15.020 17:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:15.020 17:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:15.020 [2024-11-20 17:10:07.110517] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:27:15.020 [2024-11-20 17:10:07.110584] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2103622 ] 00:27:15.281 [2024-11-20 17:10:07.200901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.281 [2024-11-20 17:10:07.254278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.851 17:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:15.851 17:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:15.851 17:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:15.851 17:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:15.851 17:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.851 17:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:15.851 17:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.851 17:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:15.851 17:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.851 17:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:15.851 17:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.851 17:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:15.851 17:10:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.851 17:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:17.239 [2024-11-20 17:10:09.039751] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:17.239 [2024-11-20 17:10:09.039772] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:17.239 [2024-11-20 17:10:09.039786] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:17.239 [2024-11-20 17:10:09.127072] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:17.239 [2024-11-20 17:10:09.312333] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:17.239 [2024-11-20 17:10:09.313323] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x223c410:1 started. 00:27:17.239 [2024-11-20 17:10:09.314873] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:17.239 [2024-11-20 17:10:09.314917] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:17.239 [2024-11-20 17:10:09.314939] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:17.239 [2024-11-20 17:10:09.314953] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:17.239 [2024-11-20 17:10:09.314974] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:17.239 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.239 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:17.239 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:17.239 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:17.239 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:17.239 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:17.239 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.239 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:17.239 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:17.239 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.239 [2024-11-20 17:10:09.359896] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x223c410 was disconnected and freed. delete nvme_qpair. 
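From here the test settles into its polling idiom: wait_for_bdev repeatedly samples the host application's bdev table over the private RPC socket until the list matches an expected value, nvme0n1 while the path is healthy and the empty string once the controller has been timed out and removed. Reconstructed from the rpc_cmd / jq / sort / xargs / sleep 1 steps visible in the trace (the canonical helpers live in host/discovery_remove_ifc.sh, so treat this as a sketch rather than the literal source; rpc_cmd is the test suite's RPC wrapper):

    get_bdev_list() {
        # names of every bdev the host app currently knows, sorted onto one line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # poll once per second until the list equals the expectation
        while [[ $(get_bdev_list) != "$1" ]]; do
            sleep 1
        done
    }

The repeated bdev_get_bdevs / sleep 1 blocks below are successive iterations of that loop while the list still reads nvme0n1.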
00:27:17.239 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:17.239 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:17.239 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:17.500 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:17.500 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:17.500 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:17.500 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:17.500 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.500 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:17.500 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:17.500 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:17.500 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.500 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:17.500 17:10:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:18.444 17:10:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:18.444 17:10:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:18.444 17:10:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:18.444 17:10:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.444 17:10:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:18.444 17:10:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:18.444 17:10:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:18.444 17:10:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.444 17:10:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:18.444 17:10:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:19.827 17:10:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:19.827 17:10:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:19.827 17:10:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:19.827 17:10:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.827 17:10:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:19.827 17:10:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:19.827 17:10:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:19.827 17:10:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.827 17:10:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:19.827 17:10:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:20.768 17:10:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:20.768 17:10:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:20.768 17:10:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:20.768 17:10:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.768 17:10:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:20.768 17:10:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:20.768 17:10:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:20.768 17:10:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.768 17:10:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:20.768 17:10:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:21.710 17:10:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:21.710 17:10:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:21.710 17:10:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:21.710 17:10:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.710 17:10:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:21.710 17:10:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:21.710 17:10:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:21.710 17:10:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.710 17:10:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:21.710 17:10:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:22.653 [2024-11-20 17:10:14.755533] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:22.653 [2024-11-20 17:10:14.755572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.653 [2024-11-20 17:10:14.755582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.653 [2024-11-20 17:10:14.755590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.653 [2024-11-20 17:10:14.755595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.653 [2024-11-20 17:10:14.755601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.653 [2024-11-20 17:10:14.755606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.653 [2024-11-20 17:10:14.755611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.653 [2024-11-20 17:10:14.755617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.653 [2024-11-20 17:10:14.755623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.653 [2024-11-20 17:10:14.755632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.653 [2024-11-20 17:10:14.755637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218c00 is same with the state(6) to be set 00:27:22.653 17:10:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:22.653 17:10:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:22.653 17:10:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:22.653 17:10:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.653 17:10:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:22.653 17:10:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.653 17:10:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:22.653 [2024-11-20 17:10:14.765555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2218c00 (9): Bad file descriptor 00:27:22.653 [2024-11-20 17:10:14.775589] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:22.653 [2024-11-20 17:10:14.775599] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:22.653 [2024-11-20 17:10:14.775602] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:22.653 [2024-11-20 17:10:14.775607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:22.653 [2024-11-20 17:10:14.775629] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
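The speed of this failure cascade is deliberate: the discovery session was started back at sh@69 with aggressive timeouts,

    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
        --wait-for-attach

so reconnect attempts fire every second, I/O is failed fast after one second, and the controller is declared lost two seconds after the path drops, which is what lets the pending wait_for_bdev '' finish within a few one-second iterations.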
00:27:22.653 17:10:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.653 17:10:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:22.653 17:10:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:24.036 17:10:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:24.036 17:10:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.036 17:10:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:24.036 17:10:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.036 17:10:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:24.036 17:10:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:24.036 17:10:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:24.036 [2024-11-20 17:10:15.835289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:24.036 [2024-11-20 17:10:15.835378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2218c00 with addr=10.0.0.2, port=4420 00:27:24.036 [2024-11-20 17:10:15.835409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218c00 is same with the state(6) to be set 00:27:24.036 [2024-11-20 17:10:15.835463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2218c00 (9): Bad file descriptor 00:27:24.036 [2024-11-20 17:10:15.836579] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:24.036 [2024-11-20 17:10:15.836651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.036 [2024-11-20 17:10:15.836674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.036 [2024-11-20 17:10:15.836708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.036 [2024-11-20 17:10:15.836730] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:24.037 [2024-11-20 17:10:15.836747] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.037 [2024-11-20 17:10:15.836761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:24.037 [2024-11-20 17:10:15.836783] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
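For orientation inside the noise: the outage being recovered from here was injected at sh@75/@76 above and is undone at sh@82/@83 below, by flipping the target-side address and link inside the namespace:

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0   # sh@75: drop the path
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down              # sh@76
    ...
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # sh@82: restore it
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # sh@83

Everything in between, the errno 110 connection timeouts, the qpair deletions, and the failed reconnect attempts, is the behavior under test: the host must notice the dead path, time the controller out, drop nvme0n1, and then rediscover the subsystem (as nvme1) once the interface comes back.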
00:27:24.037 [2024-11-20 17:10:15.836798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.037 17:10:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.037 17:10:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:24.037 17:10:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:24.978 [2024-11-20 17:10:16.839221] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:24.978 [2024-11-20 17:10:16.839237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.978 [2024-11-20 17:10:16.839247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.978 [2024-11-20 17:10:16.839252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.978 [2024-11-20 17:10:16.839258] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:24.978 [2024-11-20 17:10:16.839264] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:24.978 [2024-11-20 17:10:16.839268] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.978 [2024-11-20 17:10:16.839271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:24.978 [2024-11-20 17:10:16.839289] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:24.978 [2024-11-20 17:10:16.839307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.978 [2024-11-20 17:10:16.839314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.978 [2024-11-20 17:10:16.839323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.978 [2024-11-20 17:10:16.839329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.979 [2024-11-20 17:10:16.839335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.979 [2024-11-20 17:10:16.839342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.979 [2024-11-20 17:10:16.839348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.979 [2024-11-20 17:10:16.839354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.979 [2024-11-20 17:10:16.839360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.979 [2024-11-20 17:10:16.839365] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.979 [2024-11-20 17:10:16.839371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:27:24.979 [2024-11-20 17:10:16.839779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2208340 (9): Bad file descriptor 00:27:24.979 [2024-11-20 17:10:16.840789] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:24.979 [2024-11-20 17:10:16.840799] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:24.979 17:10:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:24.979 17:10:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.979 17:10:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:24.979 17:10:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.979 17:10:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:24.979 17:10:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:24.979 17:10:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:24.979 17:10:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.979 17:10:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:24.979 17:10:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.979 17:10:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.979 17:10:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:24.979 17:10:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:24.979 17:10:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.979 17:10:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:24.979 17:10:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.979 17:10:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:24.979 17:10:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:24.979 17:10:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:24.979 17:10:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.979 17:10:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:24.979 17:10:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:25.923 17:10:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:25.923 17:10:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.923 17:10:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:25.923 17:10:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.923 17:10:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:25.923 17:10:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:25.923 17:10:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:25.923 17:10:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.184 17:10:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:26.184 17:10:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:26.755 [2024-11-20 17:10:18.853381] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:26.756 [2024-11-20 17:10:18.853394] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:26.756 [2024-11-20 17:10:18.853404] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:27.016 [2024-11-20 17:10:18.941667] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:27.016 17:10:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:27.016 17:10:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.016 17:10:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:27.016 17:10:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.016 17:10:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:27.016 17:10:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.016 17:10:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:27.016 17:10:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.016 [2024-11-20 17:10:19.165784] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:27.016 [2024-11-20 17:10:19.166589] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x2217a60:1 started. 
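This is the recovery half of the test: once the traced ip addr add / ip link set up commands restore 10.0.0.2, the discovery poller on 10.0.0.2:8009 fetches a fresh log page, sees the NVM subsystem again, and re-creates the controller; the reads and the "attach nvme1 done" lines just below complete it, after which nvme1n1 reappears in the bdev list. The poller itself was started earlier in the script, outside this excerpt; a hedged sketch of what that setup typically looks like with SPDK's RPC tooling (the bdev name base and exact flags here are assumptions, not taken from this log):

    # hypothetical invocation; the actual start-discovery call is not in this excerpt
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -w

With a base name of nvme, the first attach produces nvme0n1 and this second attach produces nvme1n1, which is the name the wait loop above is polling for.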
00:27:27.016 [2024-11-20 17:10:19.167481] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:27.016 [2024-11-20 17:10:19.167510] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:27.017 [2024-11-20 17:10:19.167526] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:27.017 [2024-11-20 17:10:19.167536] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:27.017 [2024-11-20 17:10:19.167542] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:27.017 [2024-11-20 17:10:19.173405] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x2217a60 was disconnected and freed. delete nvme_qpair. 00:27:27.017 17:10:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:27.017 17:10:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2103622 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2103622 ']' 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2103622 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2103622 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2103622' 00:27:28.400 killing process with pid 2103622 
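killprocess, traced here and completed just below with the kill/wait pair, is the suite's guarded process reaper: it validates the pid argument, probes the process with kill -0, and on Linux checks the process's comm name against sudo before signalling. A sketch reconstructed from the traced checks (control flow is inferred; only the false branch of the sudo check is exercised in this log, so the true-branch behavior is an assumption, and the real helper in autotest_common.sh may differ in detail):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1        # traced as: '[' -z 2103622 ']'
        kill -0 "$pid" || return 0       # nothing to do if the process is already gone
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # traced check; assumed to guard against signalling a bare sudo wrapper
            [[ $process_name == sudo ]] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true              # reap the child; ignore its nonzero exit
    }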
00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2103622 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2103622 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:28.400 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:28.401 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:28.401 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:28.401 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:28.401 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:28.401 rmmod nvme_tcp 00:27:28.401 rmmod nvme_fabrics 00:27:28.401 rmmod nvme_keyring 00:27:28.401 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:28.401 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:28.401 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:28.401 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2103541 ']' 00:27:28.401 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2103541 00:27:28.401 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2103541 ']' 00:27:28.401 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2103541 00:27:28.401 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:28.401 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:28.401 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2103541 00:27:28.401 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:28.401 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:28.401 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2103541' 00:27:28.401 killing process with pid 2103541 00:27:28.401 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2103541 00:27:28.401 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2103541 00:27:28.662 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:28.662 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:28.662 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:28.662 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:28.662 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:28.662 17:10:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:28.662 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:28.662 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:28.662 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:28.662 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.662 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:28.662 17:10:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.578 17:10:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:30.578 00:27:30.578 real 0m24.449s 00:27:30.578 user 0m29.516s 00:27:30.578 sys 0m7.191s 00:27:30.578 17:10:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:30.578 17:10:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.578 ************************************ 00:27:30.578 END TEST nvmf_discovery_remove_ifc 00:27:30.578 ************************************ 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.840 ************************************ 00:27:30.840 START TEST nvmf_identify_kernel_target 00:27:30.840 ************************************ 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:30.840 * Looking for test storage... 
00:27:30.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:30.840 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:30.841 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:30.841 17:10:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:30.841 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:30.841 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:30.841 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:30.841 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:30.841 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:30.841 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:30.841 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:30.841 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:30.841 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:30.841 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:30.841 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:30.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.841 --rc genhtml_branch_coverage=1 00:27:30.841 --rc genhtml_function_coverage=1 00:27:30.841 --rc genhtml_legend=1 00:27:30.841 --rc geninfo_all_blocks=1 00:27:30.841 --rc geninfo_unexecuted_blocks=1 00:27:30.841 00:27:30.841 ' 00:27:30.841 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:30.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.841 --rc genhtml_branch_coverage=1 00:27:30.841 --rc genhtml_function_coverage=1 00:27:30.841 --rc genhtml_legend=1 00:27:30.841 --rc geninfo_all_blocks=1 00:27:30.841 --rc geninfo_unexecuted_blocks=1 00:27:30.841 00:27:30.841 ' 00:27:30.841 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:30.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.841 --rc genhtml_branch_coverage=1 00:27:30.841 --rc genhtml_function_coverage=1 00:27:30.841 --rc genhtml_legend=1 00:27:30.841 --rc geninfo_all_blocks=1 00:27:30.841 --rc geninfo_unexecuted_blocks=1 00:27:30.841 00:27:30.841 ' 00:27:30.841 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:30.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.841 --rc genhtml_branch_coverage=1 00:27:30.841 --rc genhtml_function_coverage=1 00:27:30.841 --rc genhtml_legend=1 00:27:30.841 --rc geninfo_all_blocks=1 00:27:30.841 --rc geninfo_unexecuted_blocks=1 00:27:30.841 00:27:30.841 ' 00:27:30.841 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.841 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:31.102 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:27:31.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:31.103 17:10:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:39.251 17:10:30 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:39.251 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:39.251 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:39.251 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:39.251 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:39.251 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:39.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:39.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:27:39.252 00:27:39.252 --- 10.0.0.2 ping statistics --- 00:27:39.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.252 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:39.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:39.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:27:39.252 00:27:39.252 --- 10.0.0.1 ping statistics --- 00:27:39.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.252 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.252 17:10:30 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:39.252 17:10:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:41.802 Waiting for block devices as requested 00:27:42.063 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:42.063 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:42.063 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:42.324 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:42.324 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:42.324 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:42.584 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:42.584 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:42.584 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:42.845 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:42.845 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:43.105 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:43.105 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:43.105 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:43.366 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:43.366 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:43.366 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:43.627 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:43.627 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:43.627 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:43.627 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:43.627 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:43.627 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
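configure_kernel_target, entered above, builds an in-kernel NVMe-oF target through the nvmet configfs tree; the mkdir/echo/ln -s commands traced just below are its individual steps. Consolidated, and with the redirection targets filled in (xtrace hides them, so the attribute file names here are assumptions based on the kernel's nvmet configfs layout; attr_model in particular is inferred from the Model Number reported in the identify output further below), the sequence amounts to:

    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed target file
    echo 1 > "$subsys/attr_allow_any_host"                         # assumed target file
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"   # linking the subsystem into the port exposes it

The address attributes must be written before the subsystem is linked into the port, which is why the ln -s is last; the nvme discover run traced further below, against 10.0.0.1:4420, is the check that the port actually answers.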
00:27:43.627 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:43.627 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:43.627 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:43.889 No valid GPT data, bailing 00:27:43.889 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:43.889 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:43.889 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:43.889 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:43.889 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:43.889 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:43.889 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:43.889 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:43.889 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:43.889 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:43.889 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:43.889 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:43.889 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:43.889 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:43.889 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:43.889 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:43.890 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:43.890 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:43.890 00:27:43.890 Discovery Log Number of Records 2, Generation counter 2 00:27:43.890 =====Discovery Log Entry 0====== 00:27:43.890 trtype: tcp 00:27:43.890 adrfam: ipv4 00:27:43.890 subtype: current discovery subsystem 00:27:43.890 treq: not specified, sq flow control disable supported 00:27:43.890 portid: 1 00:27:43.890 trsvcid: 4420 00:27:43.890 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:43.890 traddr: 10.0.0.1 00:27:43.890 eflags: none 00:27:43.890 sectype: none 00:27:43.890 =====Discovery Log Entry 1====== 00:27:43.890 trtype: tcp 00:27:43.890 adrfam: ipv4 00:27:43.890 subtype: nvme subsystem 00:27:43.890 treq: not specified, sq flow control disable 
supported 00:27:43.890 portid: 1 00:27:43.890 trsvcid: 4420 00:27:43.890 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:43.890 traddr: 10.0.0.1 00:27:43.890 eflags: none 00:27:43.890 sectype: none 00:27:43.890 17:10:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:43.890 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:43.890 ===================================================== 00:27:43.890 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:43.890 ===================================================== 00:27:43.890 Controller Capabilities/Features 00:27:43.890 ================================ 00:27:43.890 Vendor ID: 0000 00:27:43.890 Subsystem Vendor ID: 0000 00:27:43.890 Serial Number: cc1c356d3351c8caf77f 00:27:43.890 Model Number: Linux 00:27:43.890 Firmware Version: 6.8.9-20 00:27:43.890 Recommended Arb Burst: 0 00:27:43.890 IEEE OUI Identifier: 00 00 00 00:27:43.890 Multi-path I/O 00:27:43.890 May have multiple subsystem ports: No 00:27:43.890 May have multiple controllers: No 00:27:43.890 Associated with SR-IOV VF: No 00:27:43.890 Max Data Transfer Size: Unlimited 00:27:43.890 Max Number of Namespaces: 0 00:27:43.890 Max Number of I/O Queues: 1024 00:27:43.890 NVMe Specification Version (VS): 1.3 00:27:43.890 NVMe Specification Version (Identify): 1.3 00:27:43.890 Maximum Queue Entries: 1024 00:27:43.890 Contiguous Queues Required: No 00:27:43.890 Arbitration Mechanisms Supported 00:27:43.890 Weighted Round Robin: Not Supported 00:27:43.890 Vendor Specific: Not Supported 00:27:43.890 Reset Timeout: 7500 ms 00:27:43.890 Doorbell Stride: 4 bytes 00:27:43.890 NVM Subsystem Reset: Not Supported 00:27:43.890 Command Sets Supported 00:27:43.890 NVM Command Set: Supported 00:27:43.890 Boot Partition: Not Supported 00:27:43.890 Memory Page Size Minimum: 4096 bytes 00:27:43.890 Memory Page Size Maximum: 4096 bytes 00:27:43.890 Persistent Memory Region: Not Supported 00:27:43.890 Optional Asynchronous Events Supported 00:27:43.890 Namespace Attribute Notices: Not Supported 00:27:43.890 Firmware Activation Notices: Not Supported 00:27:43.890 ANA Change Notices: Not Supported 00:27:43.890 PLE Aggregate Log Change Notices: Not Supported 00:27:43.890 LBA Status Info Alert Notices: Not Supported 00:27:43.890 EGE Aggregate Log Change Notices: Not Supported 00:27:43.890 Normal NVM Subsystem Shutdown event: Not Supported 00:27:43.890 Zone Descriptor Change Notices: Not Supported 00:27:43.890 Discovery Log Change Notices: Supported 00:27:43.890 Controller Attributes 00:27:43.890 128-bit Host Identifier: Not Supported 00:27:43.890 Non-Operational Permissive Mode: Not Supported 00:27:43.890 NVM Sets: Not Supported 00:27:43.890 Read Recovery Levels: Not Supported 00:27:43.890 Endurance Groups: Not Supported 00:27:43.890 Predictable Latency Mode: Not Supported 00:27:43.890 Traffic Based Keep ALive: Not Supported 00:27:43.890 Namespace Granularity: Not Supported 00:27:43.890 SQ Associations: Not Supported 00:27:43.890 UUID List: Not Supported 00:27:43.890 Multi-Domain Subsystem: Not Supported 00:27:43.890 Fixed Capacity Management: Not Supported 00:27:43.890 Variable Capacity Management: Not Supported 00:27:43.890 Delete Endurance Group: Not Supported 00:27:43.890 Delete NVM Set: Not Supported 00:27:43.890 Extended LBA Formats Supported: Not Supported 00:27:43.890 Flexible Data Placement 
Supported: Not Supported 00:27:43.890 00:27:43.890 Controller Memory Buffer Support 00:27:43.890 ================================ 00:27:43.890 Supported: No 00:27:43.890 00:27:43.890 Persistent Memory Region Support 00:27:43.890 ================================ 00:27:43.890 Supported: No 00:27:43.890 00:27:43.890 Admin Command Set Attributes 00:27:43.890 ============================ 00:27:43.890 Security Send/Receive: Not Supported 00:27:43.890 Format NVM: Not Supported 00:27:43.890 Firmware Activate/Download: Not Supported 00:27:43.890 Namespace Management: Not Supported 00:27:43.890 Device Self-Test: Not Supported 00:27:43.890 Directives: Not Supported 00:27:43.890 NVMe-MI: Not Supported 00:27:43.890 Virtualization Management: Not Supported 00:27:43.890 Doorbell Buffer Config: Not Supported 00:27:43.890 Get LBA Status Capability: Not Supported 00:27:43.890 Command & Feature Lockdown Capability: Not Supported 00:27:43.890 Abort Command Limit: 1 00:27:43.890 Async Event Request Limit: 1 00:27:43.890 Number of Firmware Slots: N/A 00:27:43.890 Firmware Slot 1 Read-Only: N/A 00:27:44.153 Firmware Activation Without Reset: N/A 00:27:44.153 Multiple Update Detection Support: N/A 00:27:44.153 Firmware Update Granularity: No Information Provided 00:27:44.153 Per-Namespace SMART Log: No 00:27:44.153 Asymmetric Namespace Access Log Page: Not Supported 00:27:44.153 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:44.153 Command Effects Log Page: Not Supported 00:27:44.153 Get Log Page Extended Data: Supported 00:27:44.153 Telemetry Log Pages: Not Supported 00:27:44.153 Persistent Event Log Pages: Not Supported 00:27:44.153 Supported Log Pages Log Page: May Support 00:27:44.153 Commands Supported & Effects Log Page: Not Supported 00:27:44.153 Feature Identifiers & Effects Log Page:May Support 00:27:44.153 NVMe-MI Commands & Effects Log Page: May Support 00:27:44.153 Data Area 4 for Telemetry Log: Not Supported 00:27:44.153 Error Log Page Entries Supported: 1 00:27:44.153 Keep Alive: Not Supported 00:27:44.153 00:27:44.153 NVM Command Set Attributes 00:27:44.153 ========================== 00:27:44.153 Submission Queue Entry Size 00:27:44.153 Max: 1 00:27:44.153 Min: 1 00:27:44.153 Completion Queue Entry Size 00:27:44.153 Max: 1 00:27:44.153 Min: 1 00:27:44.153 Number of Namespaces: 0 00:27:44.153 Compare Command: Not Supported 00:27:44.153 Write Uncorrectable Command: Not Supported 00:27:44.153 Dataset Management Command: Not Supported 00:27:44.153 Write Zeroes Command: Not Supported 00:27:44.153 Set Features Save Field: Not Supported 00:27:44.153 Reservations: Not Supported 00:27:44.153 Timestamp: Not Supported 00:27:44.153 Copy: Not Supported 00:27:44.153 Volatile Write Cache: Not Present 00:27:44.153 Atomic Write Unit (Normal): 1 00:27:44.153 Atomic Write Unit (PFail): 1 00:27:44.153 Atomic Compare & Write Unit: 1 00:27:44.153 Fused Compare & Write: Not Supported 00:27:44.153 Scatter-Gather List 00:27:44.153 SGL Command Set: Supported 00:27:44.153 SGL Keyed: Not Supported 00:27:44.153 SGL Bit Bucket Descriptor: Not Supported 00:27:44.153 SGL Metadata Pointer: Not Supported 00:27:44.153 Oversized SGL: Not Supported 00:27:44.153 SGL Metadata Address: Not Supported 00:27:44.153 SGL Offset: Supported 00:27:44.153 Transport SGL Data Block: Not Supported 00:27:44.153 Replay Protected Memory Block: Not Supported 00:27:44.153 00:27:44.153 Firmware Slot Information 00:27:44.153 ========================= 00:27:44.153 Active slot: 0 00:27:44.153 00:27:44.153 00:27:44.153 Error Log 00:27:44.153 
========= 00:27:44.153 00:27:44.153 Active Namespaces 00:27:44.153 ================= 00:27:44.153 Discovery Log Page 00:27:44.153 ================== 00:27:44.153 Generation Counter: 2 00:27:44.153 Number of Records: 2 00:27:44.153 Record Format: 0 00:27:44.153 00:27:44.153 Discovery Log Entry 0 00:27:44.153 ---------------------- 00:27:44.153 Transport Type: 3 (TCP) 00:27:44.153 Address Family: 1 (IPv4) 00:27:44.153 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:44.153 Entry Flags: 00:27:44.153 Duplicate Returned Information: 0 00:27:44.153 Explicit Persistent Connection Support for Discovery: 0 00:27:44.153 Transport Requirements: 00:27:44.153 Secure Channel: Not Specified 00:27:44.153 Port ID: 1 (0x0001) 00:27:44.153 Controller ID: 65535 (0xffff) 00:27:44.153 Admin Max SQ Size: 32 00:27:44.153 Transport Service Identifier: 4420 00:27:44.153 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:44.153 Transport Address: 10.0.0.1 00:27:44.153 Discovery Log Entry 1 00:27:44.153 ---------------------- 00:27:44.153 Transport Type: 3 (TCP) 00:27:44.153 Address Family: 1 (IPv4) 00:27:44.153 Subsystem Type: 2 (NVM Subsystem) 00:27:44.153 Entry Flags: 00:27:44.153 Duplicate Returned Information: 0 00:27:44.153 Explicit Persistent Connection Support for Discovery: 0 00:27:44.153 Transport Requirements: 00:27:44.153 Secure Channel: Not Specified 00:27:44.153 Port ID: 1 (0x0001) 00:27:44.153 Controller ID: 65535 (0xffff) 00:27:44.153 Admin Max SQ Size: 32 00:27:44.153 Transport Service Identifier: 4420 00:27:44.153 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:44.153 Transport Address: 10.0.0.1 00:27:44.153 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:44.153 get_feature(0x01) failed 00:27:44.153 get_feature(0x02) failed 00:27:44.153 get_feature(0x04) failed 00:27:44.153 ===================================================== 00:27:44.153 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:44.153 ===================================================== 00:27:44.153 Controller Capabilities/Features 00:27:44.153 ================================ 00:27:44.153 Vendor ID: 0000 00:27:44.153 Subsystem Vendor ID: 0000 00:27:44.153 Serial Number: f226dc115c5e09349ee9 00:27:44.153 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:44.153 Firmware Version: 6.8.9-20 00:27:44.153 Recommended Arb Burst: 6 00:27:44.153 IEEE OUI Identifier: 00 00 00 00:27:44.153 Multi-path I/O 00:27:44.153 May have multiple subsystem ports: Yes 00:27:44.153 May have multiple controllers: Yes 00:27:44.154 Associated with SR-IOV VF: No 00:27:44.154 Max Data Transfer Size: Unlimited 00:27:44.154 Max Number of Namespaces: 1024 00:27:44.154 Max Number of I/O Queues: 128 00:27:44.154 NVMe Specification Version (VS): 1.3 00:27:44.154 NVMe Specification Version (Identify): 1.3 00:27:44.154 Maximum Queue Entries: 1024 00:27:44.154 Contiguous Queues Required: No 00:27:44.154 Arbitration Mechanisms Supported 00:27:44.154 Weighted Round Robin: Not Supported 00:27:44.154 Vendor Specific: Not Supported 00:27:44.154 Reset Timeout: 7500 ms 00:27:44.154 Doorbell Stride: 4 bytes 00:27:44.154 NVM Subsystem Reset: Not Supported 00:27:44.154 Command Sets Supported 00:27:44.154 NVM Command Set: Supported 00:27:44.154 Boot Partition: Not Supported 00:27:44.154 
Memory Page Size Minimum: 4096 bytes 00:27:44.154 Memory Page Size Maximum: 4096 bytes 00:27:44.154 Persistent Memory Region: Not Supported 00:27:44.154 Optional Asynchronous Events Supported 00:27:44.154 Namespace Attribute Notices: Supported 00:27:44.154 Firmware Activation Notices: Not Supported 00:27:44.154 ANA Change Notices: Supported 00:27:44.154 PLE Aggregate Log Change Notices: Not Supported 00:27:44.154 LBA Status Info Alert Notices: Not Supported 00:27:44.154 EGE Aggregate Log Change Notices: Not Supported 00:27:44.154 Normal NVM Subsystem Shutdown event: Not Supported 00:27:44.154 Zone Descriptor Change Notices: Not Supported 00:27:44.154 Discovery Log Change Notices: Not Supported 00:27:44.154 Controller Attributes 00:27:44.154 128-bit Host Identifier: Supported 00:27:44.154 Non-Operational Permissive Mode: Not Supported 00:27:44.154 NVM Sets: Not Supported 00:27:44.154 Read Recovery Levels: Not Supported 00:27:44.154 Endurance Groups: Not Supported 00:27:44.154 Predictable Latency Mode: Not Supported 00:27:44.154 Traffic Based Keep ALive: Supported 00:27:44.154 Namespace Granularity: Not Supported 00:27:44.154 SQ Associations: Not Supported 00:27:44.154 UUID List: Not Supported 00:27:44.154 Multi-Domain Subsystem: Not Supported 00:27:44.154 Fixed Capacity Management: Not Supported 00:27:44.154 Variable Capacity Management: Not Supported 00:27:44.154 Delete Endurance Group: Not Supported 00:27:44.154 Delete NVM Set: Not Supported 00:27:44.154 Extended LBA Formats Supported: Not Supported 00:27:44.154 Flexible Data Placement Supported: Not Supported 00:27:44.154 00:27:44.154 Controller Memory Buffer Support 00:27:44.154 ================================ 00:27:44.154 Supported: No 00:27:44.154 00:27:44.154 Persistent Memory Region Support 00:27:44.154 ================================ 00:27:44.154 Supported: No 00:27:44.154 00:27:44.154 Admin Command Set Attributes 00:27:44.154 ============================ 00:27:44.154 Security Send/Receive: Not Supported 00:27:44.154 Format NVM: Not Supported 00:27:44.154 Firmware Activate/Download: Not Supported 00:27:44.154 Namespace Management: Not Supported 00:27:44.154 Device Self-Test: Not Supported 00:27:44.154 Directives: Not Supported 00:27:44.154 NVMe-MI: Not Supported 00:27:44.154 Virtualization Management: Not Supported 00:27:44.154 Doorbell Buffer Config: Not Supported 00:27:44.154 Get LBA Status Capability: Not Supported 00:27:44.154 Command & Feature Lockdown Capability: Not Supported 00:27:44.154 Abort Command Limit: 4 00:27:44.154 Async Event Request Limit: 4 00:27:44.154 Number of Firmware Slots: N/A 00:27:44.154 Firmware Slot 1 Read-Only: N/A 00:27:44.154 Firmware Activation Without Reset: N/A 00:27:44.154 Multiple Update Detection Support: N/A 00:27:44.154 Firmware Update Granularity: No Information Provided 00:27:44.154 Per-Namespace SMART Log: Yes 00:27:44.154 Asymmetric Namespace Access Log Page: Supported 00:27:44.154 ANA Transition Time : 10 sec 00:27:44.154 00:27:44.154 Asymmetric Namespace Access Capabilities 00:27:44.154 ANA Optimized State : Supported 00:27:44.154 ANA Non-Optimized State : Supported 00:27:44.154 ANA Inaccessible State : Supported 00:27:44.154 ANA Persistent Loss State : Supported 00:27:44.154 ANA Change State : Supported 00:27:44.154 ANAGRPID is not changed : No 00:27:44.154 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:44.154 00:27:44.154 ANA Group Identifier Maximum : 128 00:27:44.154 Number of ANA Group Identifiers : 128 00:27:44.154 Max Number of Allowed Namespaces : 1024 00:27:44.154 
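
The ANA capability block above is what the Linux kernel nvmet target reports: all five ANA states are supported, and the namespace exported by this test sits in a single ANA group (see the group descriptor further down). Once an initiator has connected, the same information can be read back with nvme-cli; a minimal check, assuming nvme-cli is installed and the fabrics controller enumerated as /dev/nvme0 (the device name is illustrative, not taken from this run):

# Retrieve the Asymmetric Namespace Access log page
nvme ana-log /dev/nvme0
# ANA support is also summarized in Identify Controller (anacap/anagrpmax/nanagrpid)
nvme id-ctrl /dev/nvme0 | grep -Ei 'anacap|anagrpmax|nanagrpid'
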
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:44.154 Command Effects Log Page: Supported 00:27:44.154 Get Log Page Extended Data: Supported 00:27:44.154 Telemetry Log Pages: Not Supported 00:27:44.154 Persistent Event Log Pages: Not Supported 00:27:44.154 Supported Log Pages Log Page: May Support 00:27:44.154 Commands Supported & Effects Log Page: Not Supported 00:27:44.154 Feature Identifiers & Effects Log Page:May Support 00:27:44.154 NVMe-MI Commands & Effects Log Page: May Support 00:27:44.154 Data Area 4 for Telemetry Log: Not Supported 00:27:44.154 Error Log Page Entries Supported: 128 00:27:44.154 Keep Alive: Supported 00:27:44.154 Keep Alive Granularity: 1000 ms 00:27:44.154 00:27:44.154 NVM Command Set Attributes 00:27:44.154 ========================== 00:27:44.154 Submission Queue Entry Size 00:27:44.154 Max: 64 00:27:44.154 Min: 64 00:27:44.154 Completion Queue Entry Size 00:27:44.154 Max: 16 00:27:44.154 Min: 16 00:27:44.154 Number of Namespaces: 1024 00:27:44.154 Compare Command: Not Supported 00:27:44.154 Write Uncorrectable Command: Not Supported 00:27:44.154 Dataset Management Command: Supported 00:27:44.154 Write Zeroes Command: Supported 00:27:44.154 Set Features Save Field: Not Supported 00:27:44.154 Reservations: Not Supported 00:27:44.154 Timestamp: Not Supported 00:27:44.154 Copy: Not Supported 00:27:44.154 Volatile Write Cache: Present 00:27:44.154 Atomic Write Unit (Normal): 1 00:27:44.154 Atomic Write Unit (PFail): 1 00:27:44.154 Atomic Compare & Write Unit: 1 00:27:44.154 Fused Compare & Write: Not Supported 00:27:44.154 Scatter-Gather List 00:27:44.154 SGL Command Set: Supported 00:27:44.154 SGL Keyed: Not Supported 00:27:44.154 SGL Bit Bucket Descriptor: Not Supported 00:27:44.154 SGL Metadata Pointer: Not Supported 00:27:44.154 Oversized SGL: Not Supported 00:27:44.154 SGL Metadata Address: Not Supported 00:27:44.154 SGL Offset: Supported 00:27:44.154 Transport SGL Data Block: Not Supported 00:27:44.154 Replay Protected Memory Block: Not Supported 00:27:44.154 00:27:44.154 Firmware Slot Information 00:27:44.154 ========================= 00:27:44.154 Active slot: 0 00:27:44.154 00:27:44.154 Asymmetric Namespace Access 00:27:44.154 =========================== 00:27:44.154 Change Count : 0 00:27:44.154 Number of ANA Group Descriptors : 1 00:27:44.154 ANA Group Descriptor : 0 00:27:44.154 ANA Group ID : 1 00:27:44.154 Number of NSID Values : 1 00:27:44.154 Change Count : 0 00:27:44.154 ANA State : 1 00:27:44.154 Namespace Identifier : 1 00:27:44.154 00:27:44.154 Commands Supported and Effects 00:27:44.154 ============================== 00:27:44.154 Admin Commands 00:27:44.154 -------------- 00:27:44.154 Get Log Page (02h): Supported 00:27:44.154 Identify (06h): Supported 00:27:44.154 Abort (08h): Supported 00:27:44.154 Set Features (09h): Supported 00:27:44.154 Get Features (0Ah): Supported 00:27:44.154 Asynchronous Event Request (0Ch): Supported 00:27:44.154 Keep Alive (18h): Supported 00:27:44.154 I/O Commands 00:27:44.154 ------------ 00:27:44.154 Flush (00h): Supported 00:27:44.154 Write (01h): Supported LBA-Change 00:27:44.154 Read (02h): Supported 00:27:44.154 Write Zeroes (08h): Supported LBA-Change 00:27:44.154 Dataset Management (09h): Supported 00:27:44.154 00:27:44.154 Error Log 00:27:44.154 ========= 00:27:44.154 Entry: 0 00:27:44.154 Error Count: 0x3 00:27:44.154 Submission Queue Id: 0x0 00:27:44.154 Command Id: 0x5 00:27:44.154 Phase Bit: 0 00:27:44.154 Status Code: 0x2 00:27:44.154 Status Code Type: 0x0 00:27:44.154 Do Not Retry: 1 00:27:44.154 
Error Location: 0x28 00:27:44.154 LBA: 0x0 00:27:44.154 Namespace: 0x0 00:27:44.154 Vendor Log Page: 0x0 00:27:44.154 ----------- 00:27:44.154 Entry: 1 00:27:44.154 Error Count: 0x2 00:27:44.154 Submission Queue Id: 0x0 00:27:44.154 Command Id: 0x5 00:27:44.154 Phase Bit: 0 00:27:44.154 Status Code: 0x2 00:27:44.154 Status Code Type: 0x0 00:27:44.154 Do Not Retry: 1 00:27:44.154 Error Location: 0x28 00:27:44.154 LBA: 0x0 00:27:44.155 Namespace: 0x0 00:27:44.155 Vendor Log Page: 0x0 00:27:44.155 ----------- 00:27:44.155 Entry: 2 00:27:44.155 Error Count: 0x1 00:27:44.155 Submission Queue Id: 0x0 00:27:44.155 Command Id: 0x4 00:27:44.155 Phase Bit: 0 00:27:44.155 Status Code: 0x2 00:27:44.155 Status Code Type: 0x0 00:27:44.155 Do Not Retry: 1 00:27:44.155 Error Location: 0x28 00:27:44.155 LBA: 0x0 00:27:44.155 Namespace: 0x0 00:27:44.155 Vendor Log Page: 0x0 00:27:44.155 00:27:44.155 Number of Queues 00:27:44.155 ================ 00:27:44.155 Number of I/O Submission Queues: 128 00:27:44.155 Number of I/O Completion Queues: 128 00:27:44.155 00:27:44.155 ZNS Specific Controller Data 00:27:44.155 ============================ 00:27:44.155 Zone Append Size Limit: 0 00:27:44.155 00:27:44.155 00:27:44.155 Active Namespaces 00:27:44.155 ================= 00:27:44.155 get_feature(0x05) failed 00:27:44.155 Namespace ID:1 00:27:44.155 Command Set Identifier: NVM (00h) 00:27:44.155 Deallocate: Supported 00:27:44.155 Deallocated/Unwritten Error: Not Supported 00:27:44.155 Deallocated Read Value: Unknown 00:27:44.155 Deallocate in Write Zeroes: Not Supported 00:27:44.155 Deallocated Guard Field: 0xFFFF 00:27:44.155 Flush: Supported 00:27:44.155 Reservation: Not Supported 00:27:44.155 Namespace Sharing Capabilities: Multiple Controllers 00:27:44.155 Size (in LBAs): 3750748848 (1788GiB) 00:27:44.155 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:44.155 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:44.155 UUID: eee0ffe2-14da-4c69-b0a2-a0d619888b21 00:27:44.155 Thin Provisioning: Not Supported 00:27:44.155 Per-NS Atomic Units: Yes 00:27:44.155 Atomic Write Unit (Normal): 8 00:27:44.155 Atomic Write Unit (PFail): 8 00:27:44.155 Preferred Write Granularity: 8 00:27:44.155 Atomic Compare & Write Unit: 8 00:27:44.155 Atomic Boundary Size (Normal): 0 00:27:44.155 Atomic Boundary Size (PFail): 0 00:27:44.155 Atomic Boundary Offset: 0 00:27:44.155 NGUID/EUI64 Never Reused: No 00:27:44.155 ANA group ID: 1 00:27:44.155 Namespace Write Protected: No 00:27:44.155 Number of LBA Formats: 1 00:27:44.155 Current LBA Format: LBA Format #00 00:27:44.155 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:44.155 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:44.155 rmmod nvme_tcp 00:27:44.155 rmmod nvme_fabrics 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:44.155 17:10:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.701 17:10:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:46.701 17:10:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:46.701 17:10:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:46.701 17:10:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:46.701 17:10:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:46.701 17:10:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:46.701 17:10:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:46.701 17:10:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:46.701 17:10:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:46.701 17:10:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:46.701 17:10:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:50.004 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:50.004 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:50.004 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:27:50.004 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:50.004 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:50.004 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:50.004 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:50.004 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:50.004 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:50.004 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:50.004 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:50.004 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:50.004 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:50.004 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:50.004 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:50.004 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:50.004 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:50.577 00:27:50.577 real 0m19.674s 00:27:50.577 user 0m5.392s 00:27:50.577 sys 0m11.282s 00:27:50.577 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:50.577 17:10:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:50.577 ************************************ 00:27:50.577 END TEST nvmf_identify_kernel_target 00:27:50.577 ************************************ 00:27:50.577 17:10:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:50.577 17:10:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:50.577 17:10:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:50.577 17:10:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.577 ************************************ 00:27:50.577 START TEST nvmf_auth_host 00:27:50.577 ************************************ 00:27:50.577 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:50.577 * Looking for test storage... 
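
Before the harness moves on to nvmf_auth_host, the clean_kernel_target trace above (nvmf/common.sh@712-723) tears the configfs-based kernel target down in reverse order of setup: disable the namespace, unlink the subsystem from the port, remove the configfs directories, then unload the modules. As a standalone sketch with the same NQN and port number as this run (xtrace does not print redirections, so the destination of the traced `echo 0` is assumed to be the namespace's enable attribute):

nqn=nqn.2016-06.io.spdk:testnqn
echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable   # assumed target of the traced 'echo 0'
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn
rmdir /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/$nqn
modprobe -r nvmet_tcp nvmet   # transport first, then the core module
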
00:27:50.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:50.577 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:50.577 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:50.577 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:50.839 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:50.839 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:50.839 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:50.839 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:50.839 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:50.839 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:50.839 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:50.839 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:50.839 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:50.839 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:50.839 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:50.839 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:50.839 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:50.839 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:50.839 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:50.839 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:50.839 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:50.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.840 --rc genhtml_branch_coverage=1 00:27:50.840 --rc genhtml_function_coverage=1 00:27:50.840 --rc genhtml_legend=1 00:27:50.840 --rc geninfo_all_blocks=1 00:27:50.840 --rc geninfo_unexecuted_blocks=1 00:27:50.840 00:27:50.840 ' 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:50.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.840 --rc genhtml_branch_coverage=1 00:27:50.840 --rc genhtml_function_coverage=1 00:27:50.840 --rc genhtml_legend=1 00:27:50.840 --rc geninfo_all_blocks=1 00:27:50.840 --rc geninfo_unexecuted_blocks=1 00:27:50.840 00:27:50.840 ' 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:50.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.840 --rc genhtml_branch_coverage=1 00:27:50.840 --rc genhtml_function_coverage=1 00:27:50.840 --rc genhtml_legend=1 00:27:50.840 --rc geninfo_all_blocks=1 00:27:50.840 --rc geninfo_unexecuted_blocks=1 00:27:50.840 00:27:50.840 ' 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:50.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.840 --rc genhtml_branch_coverage=1 00:27:50.840 --rc genhtml_function_coverage=1 00:27:50.840 --rc genhtml_legend=1 00:27:50.840 --rc geninfo_all_blocks=1 00:27:50.840 --rc geninfo_unexecuted_blocks=1 00:27:50.840 00:27:50.840 ' 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.840 17:10:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:50.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:50.840 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:50.841 17:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.990 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.990 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:58.990 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:58.990 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:58.990 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:58.990 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:58.990 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:58.990 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:58.990 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:58.990 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:58.990 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:58.990 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:58.990 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:58.990 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:58.990 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:58.990 17:10:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.990 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.990 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.990 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:58.991 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:58.991 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.991 
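
gather_supported_nvmf_pci_devs above matches the two e810 functions by PCI ID (0x8086:0x159b, ice driver) and resolves each one to its netdev through sysfs, yielding the cvl_0_0/cvl_0_1 interfaces echoed below. Outside the harness the same lookup can be done directly; a sketch assuming lspci is available:

# Map each Intel E810 function (8086:159b) to the net interface bound to it
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net 2>/dev/null)"
done
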
17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:58.991 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:58.991 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.991 17:10:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:58.991 17:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:58.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:27:58.991 00:27:58.991 --- 10.0.0.2 ping statistics --- 00:27:58.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.991 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:58.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:27:58.991 00:27:58.991 --- 10.0.0.1 ping statistics --- 00:27:58.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.991 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2118094 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2118094 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2118094 ']' 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
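
nvmf_tcp_init above splits the e810 port pair across network namespaces: cvl_0_0 (10.0.0.2) is moved into cvl_0_0_ns_spdk to act as the target side, while cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator, and the two pings confirm connectivity in both directions before nvmf_tgt is started inside the namespace. The traced commands reduce to:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP traffic on the discovery/IO port
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
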
00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:58.991 17:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0e489b9175734ae3e614d94bbd4c23c7 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.zO0 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0e489b9175734ae3e614d94bbd4c23c7 0 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0e489b9175734ae3e614d94bbd4c23c7 0 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0e489b9175734ae3e614d94bbd4c23c7 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.zO0 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.zO0 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.zO0 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:59.253 17:10:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=079c9968ccbe96530c4763602a76a49c2b4066468e92cba66e17de518ebbb9b1 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gqq 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 079c9968ccbe96530c4763602a76a49c2b4066468e92cba66e17de518ebbb9b1 3 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 079c9968ccbe96530c4763602a76a49c2b4066468e92cba66e17de518ebbb9b1 3 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=079c9968ccbe96530c4763602a76a49c2b4066468e92cba66e17de518ebbb9b1 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gqq 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gqq 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.gqq 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d843c09483fcd0620801754a788844c4aca3c6ad82203665 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.hSt 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d843c09483fcd0620801754a788844c4aca3c6ad82203665 0 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d843c09483fcd0620801754a788844c4aca3c6ad82203665 0 
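
Each gen_dhchap_key call above draws the raw secret from /dev/urandom with xxd, wraps it with an inline python helper (whose body xtrace hides), and chmods the resulting key file to 0600. A sketch of the wrapping step, assuming the standard DH-HMAC-CHAP key container of base64(key || CRC-32(key)) with a two-digit hash selector (00 = null, 01 = sha256, 02 = sha384, 03 = sha512, matching the digests map traced above):

key_hex=$(xxd -p -c0 -l 32 /dev/urandom)   # 32 random bytes, as for the 64-char sha512 key
python3 - "$key_hex" <<'EOF'
import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(key).to_bytes(4, 'little')   # CRC-32 of the key, appended little-endian
print(f"DHHC-1:03:{base64.b64encode(key + crc).decode()}:")
EOF
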
00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d843c09483fcd0620801754a788844c4aca3c6ad82203665 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:59.253 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:59.515 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.hSt 00:27:59.515 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.hSt 00:27:59.515 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.hSt 00:27:59.515 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:59.515 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:59.515 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:59.515 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:59.515 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:59.515 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:59.515 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:59.515 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ca4735ad11b8536901bc7e41bf508e3b7ed7eb50b12f3606 00:27:59.515 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:59.515 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.WcE 00:27:59.515 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ca4735ad11b8536901bc7e41bf508e3b7ed7eb50b12f3606 2 00:27:59.515 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ca4735ad11b8536901bc7e41bf508e3b7ed7eb50b12f3606 2 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ca4735ad11b8536901bc7e41bf508e3b7ed7eb50b12f3606 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.WcE 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.WcE 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.WcE 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:59.516 17:10:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5bc3fa9038385bb77153c9695a269967 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Fri 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5bc3fa9038385bb77153c9695a269967 1 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5bc3fa9038385bb77153c9695a269967 1 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5bc3fa9038385bb77153c9695a269967 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Fri 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Fri 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Fri 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7ad77a2e0481ec08619bc81174311dd0 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.fkR 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7ad77a2e0481ec08619bc81174311dd0 1 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7ad77a2e0481ec08619bc81174311dd0 1 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=7ad77a2e0481ec08619bc81174311dd0 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.fkR 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.fkR 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.fkR 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:59.516 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a9e80fae728f19101dfa4982c7275c8442259a5338a1c17c 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Y4F 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a9e80fae728f19101dfa4982c7275c8442259a5338a1c17c 2 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a9e80fae728f19101dfa4982c7275c8442259a5338a1c17c 2 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a9e80fae728f19101dfa4982c7275c8442259a5338a1c17c 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Y4F 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Y4F 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Y4F 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:59.778 17:10:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a664f5c242e226a54d7b926a1bbc0413 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.BIr 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a664f5c242e226a54d7b926a1bbc0413 0 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a664f5c242e226a54d7b926a1bbc0413 0 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a664f5c242e226a54d7b926a1bbc0413 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.BIr 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.BIr 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.BIr 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7d4b9166247900e09ef60a26f8219f0151d5c6f38a92189356fd955c4dee35e3 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.CWj 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7d4b9166247900e09ef60a26f8219f0151d5c6f38a92189356fd955c4dee35e3 3 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7d4b9166247900e09ef60a26f8219f0151d5c6f38a92189356fd955c4dee35e3 3 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7d4b9166247900e09ef60a26f8219f0151d5c6f38a92189356fd955c4dee35e3 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.CWj 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.CWj 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.CWj 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2118094 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2118094 ']' 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:59.778 17:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zO0 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.gqq ]] 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gqq 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.hSt 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.WcE ]] 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.WcE 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Fri 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.fkR ]] 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fkR 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Y4F 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.BIr ]] 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.BIr 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.CWj 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.041 17:10:52 
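[With the ten key files in place, the host/auth.sh@80-82 loop above registers each one with the running target as key0..key4 and ckey0..ckey3. The same registration issued directly with scripts/rpc.py is sketched below; the RPC socket path is an assumption, while the file names are the ones generated earlier in this log.]

RPC="scripts/rpc.py -s /var/tmp/spdk.sock"     # socket path assumed
keys=(/tmp/spdk.key-null.zO0 /tmp/spdk.key-null.hSt /tmp/spdk.key-sha256.Fri
      /tmp/spdk.key-sha384.Y4F /tmp/spdk.key-sha512.CWj)
ckeys=(/tmp/spdk.key-sha512.gqq /tmp/spdk.key-sha384.WcE
       /tmp/spdk.key-sha256.fkR /tmp/spdk.key-null.BIr "")
for i in "${!keys[@]}"; do
  $RPC keyring_file_add_key "key$i" "${keys[i]}"
  # ckeys[4] is empty in this run, so key4 gets no controller key
  [[ -n ${ckeys[i]} ]] && $RPC keyring_file_add_key "ckey$i" "${ckeys[i]}"
done
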
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:00.041 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:00.042 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:00.042 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:00.042 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:00.303 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:00.303 17:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:03.609 Waiting for block devices as requested 00:28:03.609 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:03.609 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:03.609 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:03.870 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:03.870 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:03.870 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:04.131 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:04.131 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:04.131 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:04.391 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:04.391 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:04.391 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:04.652 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:04.652 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:04.652 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:04.912 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:04.912 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:05.855 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:05.855 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:05.855 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:05.855 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:05.855 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:05.855 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:05.855 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:05.855 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:05.855 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:05.855 No valid GPT data, bailing 00:28:05.855 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:05.855 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:05.855 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:05.855 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:05.855 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:05.855 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:05.856 17:10:57 
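[From here configure_kernel_target lays out a Linux soft target under /sys/kernel/config/nvmet: the traces create the subsystem, namespace and port directories, echo the backing device, address, transport, service id and address family into them, and finally link the subsystem into the port. The trace shows only the echoed values, so the attribute file names in this sketch (device_path, addr_traddr, and so on) are the standard nvmet configfs names, assumed rather than read from the script, as is the nvmet-tcp modprobe.]

nqn=nqn.2024-02.io.spdk:cnode0
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet && modprobe nvmet-tcp                  # tcp transport module assumed
mkdir -p "$sub/namespaces/1" "$port"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"   # backing block device
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/$nqn"                  # expose the subsystem on the port
nvme discover -t tcp -a 10.0.0.1 -s 4420              # expect 2 records, as logged below
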
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:28:05.856 00:28:05.856 Discovery Log Number of Records 2, Generation counter 2 00:28:05.856 =====Discovery Log Entry 0====== 00:28:05.856 trtype: tcp 00:28:05.856 adrfam: ipv4 00:28:05.856 subtype: current discovery subsystem 00:28:05.856 treq: not specified, sq flow control disable supported 00:28:05.856 portid: 1 00:28:05.856 trsvcid: 4420 00:28:05.856 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:05.856 traddr: 10.0.0.1 00:28:05.856 eflags: none 00:28:05.856 sectype: none 00:28:05.856 =====Discovery Log Entry 1====== 00:28:05.856 trtype: tcp 00:28:05.856 adrfam: ipv4 00:28:05.856 subtype: nvme subsystem 00:28:05.856 treq: not specified, sq flow control disable supported 00:28:05.856 portid: 1 00:28:05.856 trsvcid: 4420 00:28:05.856 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:05.856 traddr: 10.0.0.1 00:28:05.856 eflags: none 00:28:05.856 sectype: none 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: ]] 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.856 17:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.118 nvme0n1 00:28:06.118 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.118 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: ]] 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
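[Each iteration from here repeats the shape just traced: nvmet_auth_set_key pushes one digest/dhgroup/secret combination into the kernel host entry, then connect_authenticate sets the matching DH-HMAC-CHAP options on the SPDK bdev_nvme layer, attaches, verifies the controller name, and detaches. Condensed into direct commands below; the configfs attribute names for the host entry are assumed as above, the secrets are elided, and the rpc calls are the ones visible in the trace, here issued via scripts/rpc.py.]

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'  > "$host/dhchap_hash"       # attr names assumed
echo ffdhe2048       > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:...' > "$host/dhchap_key"        # host secret, elided here
echo 'DHHC-1:02:...' > "$host/dhchap_ctrl_key"   # controller secret, elided

RPC="scripts/rpc.py -s /var/tmp/spdk.sock"       # socket path assumed
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
     -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
     --dhchap-key key1 --dhchap-ctrlr-key ckey1
$RPC bdev_nvme_get_controllers                   # expect name "nvme0" on success
$RPC bdev_nvme_detach_controller nvme0
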
00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.119 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.380 nvme0n1 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.380 17:10:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: ]] 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:06.380 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.381 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.642 nvme0n1 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: ]] 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.642 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.902 nvme0n1 00:28:06.902 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.902 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.902 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.902 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:06.902 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.902 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.902 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.902 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: ]] 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip
00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:06.903 17:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.903 nvme0n1
00:28:06.903 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:06.903 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:06.903 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:06.903 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:06.903 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=:
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=:
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.163 nvme0n1
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.163 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.424 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:07.424 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:07.424 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.424 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.424 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.424 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:07.424 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:07.424 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:28:07.424 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:07.424 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:07.424 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:07.424 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:07.424 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce:
00:28:07.424 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=:
00:28:07.424 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce:
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: ]]
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=:
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.425 nvme0n1
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.425 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==:
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==:
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==:
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: ]]
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==:
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.685 nvme0n1
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.685 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r:
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN:
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r:
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: ]]
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN:
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.945 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.945 nvme0n1
00:28:07.945 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.945 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:07.945 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:07.945 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.945 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==:
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR:
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==:
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: ]]
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR:
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:08.204 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:08.205 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.205 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.205 nvme0n1
00:28:08.205 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.205 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.205 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.205 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.205 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=:
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=:
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.464 nvme0n1
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.464 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce:
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=:
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce:
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: ]]
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=:
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.724 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.985 nvme0n1
00:28:08.985 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.985 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.985 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.985 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.985 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.985 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.985 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.985 17:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.985 17:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==:
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==:
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==:
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: ]]
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==:
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.985 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.248 nvme0n1
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r:
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN:
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r:
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: ]]
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN:
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.248 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.511 nvme0n1
00:28:09.511 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.511 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:09.511 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:09.511 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.511 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.511 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.770 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:09.770 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:09.770 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.770 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.770 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.770 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:09.770 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:28:09.770 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:09.770 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==:
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR:
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==:
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: ]]
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR:
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.771 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.031 nvme0n1
00:28:10.031 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.031 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:10.031 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:10.031 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.031 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.031 17:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=:
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=:
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.031 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.290 nvme0n1
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce:
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=:
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce:
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: ]]
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=:
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.290 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.857 nvme0n1
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==:
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==:
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==:
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: ]]
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==:
00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:10.857 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.858 17:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.428 nvme0n1 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.428 17:11:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: ]] 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.428 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:11.429 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.429 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.690 nvme0n1 00:28:11.690 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.690 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.690 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.690 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.690 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.690 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.690 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.690 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.690 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.690 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: ]] 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.951 17:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.212 nvme0n1 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.212 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:12.213 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.213 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.213 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.213 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.213 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.213 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.213 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.213 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.213 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.213 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.213 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.213 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.213 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.213 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.473 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:12.473 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.473 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.733 nvme0n1 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: ]] 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.733 17:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:13.400 nvme0n1 00:28:13.400 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.400 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.400 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.400 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.400 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.400 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.400 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.400 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.400 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.400 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.674 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.674 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.674 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:13.674 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.674 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:13.674 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:13.674 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: ]] 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.675 17:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.272 nvme0n1 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:14.272 
17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:14.272 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: ]] 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.273 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.843 nvme0n1 00:28:14.843 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.843 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.843 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.843 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.843 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.843 17:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.843 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.843 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.843 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.843 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.103 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.103 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.103 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:15.103 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.103 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.103 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.103 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:15.103 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: ]] 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.104 
17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.104 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.673 nvme0n1 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.673 17:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.242 nvme0n1 00:28:16.242 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.242 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.242 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.242 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.242 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: ]] 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.503 nvme0n1 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:16.503 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: ]] 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.765 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.766 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.766 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.766 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:16.766 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.766 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.766 nvme0n1 00:28:16.766 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.766 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.766 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.766 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.766 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.766 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.766 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.766 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.766 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.766 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:17.086 17:11:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: ]] 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.086 17:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.086 nvme0n1 00:28:17.086 17:11:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: ]] 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.086 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.087 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.087 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.087 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.087 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.087 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.087 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.087 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.087 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.087 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.087 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.087 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:17.087 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.087 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.348 nvme0n1 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.348 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.608 nvme0n1 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: ]] 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.608 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.868 nvme0n1 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.868 
17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: ]] 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.868 17:11:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.868 17:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.128 nvme0n1 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: ]] 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.128 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.389 nvme0n1 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: ]] 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.389 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.649 nvme0n1 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
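The nvmet_auth_set_key helper traced at host/auth.sh@42-51 around this point prints only its echo statements; the xtrace output does not show where they are redirected. A plausible reconstruction, assuming the echoes land in the Linux kernel nvmet target's standard per-host configfs attributes (the paths below are an assumption, they are not visible in this trace):

    # Hypothetical shape of nvmet_auth_set_key (host/auth.sh@42-51). The
    # configfs destinations are assumed; only the echoed values appear above.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"     # auth.sh@48
        echo "$dhgroup" > "$host/dhchap_dhgroup"       # auth.sh@49
        echo "${keys[keyid]}" > "$host/dhchap_key"     # auth.sh@50
        # Controller key only for bidirectional auth; ckey is empty for
        # key id 4, so that iteration exercises unidirectional auth (@51).
        [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
    }

The DHHC-1 strings being written follow the NVMe DH-HMAC-CHAP secret representation DHHC-1:<hash>:<base64 of secret plus CRC-32>:, where hash id 00 marks an untransformed secret and 01/02/03 mark SHA-256-, SHA-384- and SHA-512-sized secrets, which is why the key and ckey values in this trace differ in length.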
00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.649 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.910 nvme0n1
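With the target programmed, each iteration of the loop at host/auth.sh@101-104 follows the same host-side sequence: pin the initiator to one digest/DH-group pair, attach with the matching key objects, confirm that the controller actually appeared, and detach. A condensed sketch of that flow, assuming rpc_cmd wraps SPDK's scripts/rpc.py as in autotest_common.sh and that the keys/ckeys arrays were populated earlier in the run:

    # Condensed per-key connect/authenticate loop, as traced above.
    digest=sha384
    for dhgroup in "${dhgroups[@]}"; do          # auth.sh@101
        for keyid in "${!keys[@]}"; do           # auth.sh@102
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
                --dhchap-dhgroups "$dhgroup"
            # Pass --dhchap-ctrlr-key only when a controller key exists (auth.sh@58).
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
                -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
            # Authentication succeeded iff the controller materialized.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done

The escaped pattern in the [[ nvme0 == \n\v\m\e\0 ]] lines is just how xtrace prints the quoted right-hand side of this string comparison, so the controller name is matched literally rather than as a glob.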
00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: ]] 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0
]] 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.910 17:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.171 nvme0n1 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: ]] 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:19.171 17:11:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.171 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.432 nvme0n1 00:28:19.432 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.432 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.432 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.432 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.432 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.432 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: ]] 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.693 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.954 nvme0n1 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: ]] 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.954 17:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.954 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.954 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.954 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.954 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.954 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.954 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.954 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.954 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.954 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.954 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.954 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.955 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.955 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:19.955 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.955 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.215 nvme0n1 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.215 17:11:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.215 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.475 nvme0n1 00:28:20.475 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.475 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.475 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.475 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.475 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.475 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
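common/autotest_common.sh@591 -- # [[ 0 == 0 ]]

The entries above close out the hmac(sha384)/ffdhe4096 pass: for each keyid the harness loads the key pair into the target (nvmet_auth_set_key), restricts the host to a single digest and DH group (bdev_nvme_set_options), attaches nvme0 over TCP with the matching --dhchap-key (adding --dhchap-ctrlr-key only when a controller secret exists, which is what the ckey=(${ckeys[keyid]:+...}) expansion decides), confirms a controller actually appeared, and detaches it again. Condensed into a standalone sketch of one such round, on the assumption that a target is already listening on 10.0.0.1:4420 and that the DHHC-1 secrets were registered earlier in auth.sh under the names key1 and ckey1:

    # host side of one connect_authenticate round (sketch; rpc_cmd is the
    # autotest wrapper around SPDK's rpc.py)
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1    # prints nvme0n1 on success
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0              # clean up for the next keyid

The run now repeats the same keyid matrix for ffdhe6144 in the entries below.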
00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: ]] 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.734 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:20.735 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:20.735 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:20.735 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.735 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host --
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.735 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:20.735 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.735 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:20.735 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:20.735 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:20.735 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:20.735 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.735 17:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.994 nvme0n1 00:28:20.994 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.994 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.994 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.994 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.994 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.994 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.995 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.995 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.995 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.995 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: ]] 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.255 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.516 nvme0n1 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.516 17:11:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: ]] 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.516 17:11:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.516 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.778 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.778 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.778 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:21.778 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.778 17:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.039 nvme0n1 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.039 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: ]] 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:22.040 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.040 
17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.612 nvme0n1 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.612 17:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.181 nvme0n1 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.181 17:11:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: ]] 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:23.181 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.182 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.750 nvme0n1 00:28:23.750 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.750 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.750 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.750 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.750 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: ]] 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.751 17:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.694 nvme0n1 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: ]] 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.694 
17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.694 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:24.695 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.695 17:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.265 nvme0n1 00:28:25.265 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.265 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.265 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.265 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.265 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.265 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.265 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.265 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.265 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.265 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.265 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.265 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.265 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:25.265 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.265 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:25.265 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: ]] 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.266 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.835 nvme0n1 00:28:25.835 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.835 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.835 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.835 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.835 17:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.835 17:11:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.095 17:11:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.095 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.666 nvme0n1 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: ]] 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:26.666 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.667 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:26.928 nvme0n1 00:28:26.928 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.928 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.928 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.928 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.928 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.928 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.928 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.928 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.928 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.928 17:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: ]] 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.928 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.929 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.929 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.929 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.929 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.929 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.929 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.929 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.929 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.929 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:26.929 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.929 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.198 nvme0n1 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:27.198 
17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: ]] 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.198 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.460 nvme0n1 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: ]] 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.460 
17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.460 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.721 nvme0n1 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.721 nvme0n1 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.721 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: ]] 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.981 17:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.981 nvme0n1 00:28:27.981 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.981 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.981 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.981 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.981 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.981 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.241 
17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: ]] 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.241 17:11:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.241 nvme0n1 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.241 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:28.502 17:11:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: ]] 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.502 nvme0n1 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.502 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: ]] 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.762 17:11:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.762 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.762 nvme0n1 00:28:29.022 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.022 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.022 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.022 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.022 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.022 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.022 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.022 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.022 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.022 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.022 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.022 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.022 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:29.022 17:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.022 
17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.022 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
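The trace above is one pass of the suite's nested loop over DH-HMAC-CHAP digests and DH groups (the host/auth.sh@101, @102, @103 and @104 tags in each entry): for every (digest, dhgroup, keyid) combination the target side is re-keyed, the host is restricted to that single digest/DH group, and an authenticated connect/verify/disconnect cycle is run. As a reading aid, the sketch below condenses one iteration from the commands visible in this trace. It is a reconstruction, not the script itself: nvmet_auth_set_key and rpc_cmd are helpers defined earlier in the suite and appear here by name only, and the key material is the DHHC-1 test keys echoed above.

  # One iteration, as seen in this trace: sha512 / ffdhe3072 / keyid 3.
  # For keyid 4 ckey is empty, so the ${ckeys[keyid]:+...} expansion in
  # host/auth.sh@58 drops the --dhchap-ctrlr-key argument entirely.
  nvmet_auth_set_key sha512 ffdhe3072 3                     # program the target-side key
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 \
      --dhchap-dhgroups ffdhe3072                           # pin the host to one digest/group
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3            # authenticated connect
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'      # expect "nvme0"
  rpc_cmd bdev_nvme_detach_controller nvme0                 # tear down before the next keyid

The 10.0.0.1 address is resolved by get_main_ns_ip from NVMF_INITIATOR_IP (the tcp entry in ip_candidates), which is why each iteration's trace repeats the same candidate-selection steps before the attach call.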
00:28:29.282 nvme0n1 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: ]] 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:29.282 17:11:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.282 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.283 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.283 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.283 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.283 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.283 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.283 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.283 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.283 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:29.283 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.283 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.543 nvme0n1 00:28:29.543 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.544 17:11:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: ]] 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.544 17:11:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.544 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.805 nvme0n1 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: ]] 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.805 17:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.065 nvme0n1 00:28:30.065 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.065 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.065 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.065 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.065 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: ]] 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.324 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.583 nvme0n1 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.583 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.843 nvme0n1 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: ]] 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.843 17:11:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.843 17:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.413 nvme0n1 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: ]] 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:31.413 17:11:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.413 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.984 nvme0n1 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: ]] 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.984 17:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.245 nvme0n1 00:28:32.245 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.245 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.245 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.245 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.245 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: ]] 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.506 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.767 nvme0n1 00:28:32.767 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.767 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.767 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.767 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.767 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.767 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:33.027 17:11:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.027 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.028 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.028 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.028 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.028 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.028 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.028 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.028 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.028 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.028 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:33.028 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.028 17:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.288 nvme0n1 00:28:33.288 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.288 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.288 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.288 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.288 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.288 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.288 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.288 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.288 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.288 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU0ODliOTE3NTczNGFlM2U2MTRkOTRiYmQ0YzIzYzdEiBce: 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: ]] 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDc5Yzk5NjhjY2JlOTY1MzBjNDc2MzYwMmE3NmE0OWMyYjQwNjY0NjhlOTJjYmE2NmUxN2RlNTE4ZWJiYjliMavzjs8=: 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.548 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.549 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.549 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.549 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.549 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.549 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.549 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.549 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.549 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.549 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.549 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:33.549 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.549 17:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.119 nvme0n1 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: ]] 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.119 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.120 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.120 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.120 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.120 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.120 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.120 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.120 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.120 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.120 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.120 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.120 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.120 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.120 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.690 nvme0n1 00:28:34.690 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.690 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.690 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.690 17:11:26 
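After each successful attach the test asserts that exactly one controller exists and is named nvme0, then detaches it before the next digest/dhgroup round; that is the @64/@65 pattern repeating throughout this excerpt. Stated as plain rpc.py calls (same socket assumption as above):

  # Post-attach verification and teardown used after every authenticated connect.
  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]                       # exactly one controller, named nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0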
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.690 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.950 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.950 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.950 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.950 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: ]] 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.951 17:11:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.951 17:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.519 nvme0n1 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTllODBmYWU3MjhmMTkxMDFkZmE0OTgyYzcyNzVjODQ0MjI1OWE1MzM4YTFjMTdjPJ2nmQ==: 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: ]] 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY2NGY1YzI0MmUyMjZhNTRkN2I5MjZhMWJiYzA0MTPdsReR: 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.519 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.520 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.520 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.520 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:35.520 17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.520 
17:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.457 nvme0n1 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Q0YjkxNjYyNDc5MDBlMDllZjYwYTI2ZjgyMTlmMDE1MWQ1YzZmMzhhOTIxODkzNTZmZDk1NWM0ZGVlMzVlM6Ix34E=: 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.457 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.458 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.458 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.458 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.458 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.458 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.458 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.458 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.458 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.458 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.458 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:36.458 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.458 17:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.028 nvme0n1 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: ]] 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.028 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.028 request: 00:28:37.028 { 00:28:37.028 "name": "nvme0", 00:28:37.028 "trtype": "tcp", 00:28:37.028 "traddr": "10.0.0.1", 00:28:37.028 "adrfam": "ipv4", 00:28:37.028 "trsvcid": "4420", 00:28:37.028 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:37.028 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:37.028 "prchk_reftag": false, 00:28:37.029 "prchk_guard": false, 00:28:37.029 "hdgst": false, 00:28:37.029 "ddgst": false, 00:28:37.029 "allow_unrecognized_csi": false, 00:28:37.029 "method": "bdev_nvme_attach_controller", 00:28:37.029 "req_id": 1 00:28:37.029 } 00:28:37.029 Got JSON-RPC error response 00:28:37.029 response: 00:28:37.029 { 00:28:37.029 "code": -5, 00:28:37.029 "message": "Input/output error" 00:28:37.029 } 00:28:37.029 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:37.029 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:37.029 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:37.029 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:37.029 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:37.029 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.029 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:37.029 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.029 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.029 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
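The request/response dump above is the first negative case: @110 re-keyed the kernel target for sha256/ffdhe2048 at keyid 1, and @112 wraps the attach in NOT so that the authentication failure, surfaced as JSON-RPC error -5 ("Input/output error"), is the passing outcome. A condensed reproduction, with a minimal stand-in for the NOT helper from common/autotest_common.sh:

  # NOT(): succeed only when the wrapped command fails (simplified; the real
  # helper in common/autotest_common.sh also records the exit status in es).
  NOT() { if "$@"; then return 1; fi; }

  # With DH-HMAC-CHAP now required by the target, attaching without any
  # --dhchap-key must fail; rpc.py exits non-zero and NOT turns that into a pass.
  NOT scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0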
00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.289 request: 00:28:37.289 { 00:28:37.289 "name": "nvme0", 00:28:37.289 "trtype": "tcp", 00:28:37.289 "traddr": "10.0.0.1", 00:28:37.289 "adrfam": "ipv4", 00:28:37.289 "trsvcid": "4420", 00:28:37.289 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:37.289 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:37.289 "prchk_reftag": false, 00:28:37.289 "prchk_guard": false, 00:28:37.289 "hdgst": false, 00:28:37.289 "ddgst": false, 00:28:37.289 "dhchap_key": "key2", 00:28:37.289 "allow_unrecognized_csi": false, 00:28:37.289 "method": "bdev_nvme_attach_controller", 00:28:37.289 "req_id": 1 00:28:37.289 } 00:28:37.289 Got JSON-RPC error response 00:28:37.289 response: 00:28:37.289 { 00:28:37.289 "code": -5, 00:28:37.289 "message": "Input/output error" 00:28:37.289 } 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:37.289 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
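The second negative case (the "dhchap_key": "key2" dump above) fails for a different reason: a key is offered, but it is not the key1/ckey1 pair installed on the target at keyid 1, and the host still sees only the generic -5. What "installed on the target" means comes down to nvmet_auth_set_key; its body is not captured in this trace, only the echoed 'hmac(shaN)', dhgroup, and DHHC-1 strings, which map naturally onto the kernel nvmet host attributes. An approximate reconstruction, with the attribute names assumed from the stock nvmet configfs layout:

  # Approximate target-side effect of: nvmet_auth_set_key sha256 ffdhe2048 1
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest string echoed at @48
  echo ffdhe2048 > "$host/dhchap_dhgroup"        # DH group echoed at @49
  echo 'DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==:' \
      > "$host/dhchap_key"                       # host key echoed at @50
  echo 'DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==:' \
      > "$host/dhchap_ctrl_key"                  # ckey echoed at @51 when non-empty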
00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.290 request: 00:28:37.290 { 00:28:37.290 "name": "nvme0", 00:28:37.290 "trtype": "tcp", 00:28:37.290 "traddr": "10.0.0.1", 00:28:37.290 "adrfam": "ipv4", 00:28:37.290 "trsvcid": "4420", 00:28:37.290 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:37.290 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:37.290 "prchk_reftag": false, 00:28:37.290 "prchk_guard": false, 00:28:37.290 "hdgst": false, 00:28:37.290 "ddgst": false, 00:28:37.290 "dhchap_key": "key1", 00:28:37.290 "dhchap_ctrlr_key": "ckey2", 00:28:37.290 "allow_unrecognized_csi": false, 00:28:37.290 "method": "bdev_nvme_attach_controller", 00:28:37.290 "req_id": 1 00:28:37.290 } 00:28:37.290 Got JSON-RPC error response 00:28:37.290 response: 00:28:37.290 { 00:28:37.290 "code": -5, 00:28:37.290 "message": "Input/output 
error" 00:28:37.290 } 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.290 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.551 nvme0n1 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: ]] 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.551 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.811 request: 00:28:37.811 { 00:28:37.811 "name": "nvme0", 00:28:37.811 "dhchap_key": "key1", 00:28:37.811 "dhchap_ctrlr_key": "ckey2", 00:28:37.811 "method": "bdev_nvme_set_keys", 00:28:37.811 "req_id": 1 00:28:37.811 } 00:28:37.811 Got JSON-RPC error response 00:28:37.811 response: 00:28:37.811 { 00:28:37.811 "code": -13, 00:28:37.811 "message": "Permission denied" 00:28:37.811 } 00:28:37.811 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:37.811 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:37.811 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:37.811 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:37.811 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:28:37.811 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.811 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:37.811 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.811 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.811 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.811 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:37.811 17:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:38.746 17:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.746 17:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:38.746 17:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.746 17:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.746 17:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.746 17:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:38.746 17:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg0M2MwOTQ4M2ZjZDA2MjA4MDE3NTRhNzg4ODQ0YzRhY2EzYzZhZDgyMjAzNjY16F0c+Q==: 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: ]] 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Y2E0NzM1YWQxMWI4NTM2OTAxYmM3ZTQxYmY1MDhlM2I3ZWQ3ZWI1MGIxMmYzNjA2OY7wmQ==: 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.128 17:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.128 nvme0n1 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJjM2ZhOTAzODM4NWJiNzcxNTNjOTY5NWEyNjk5NjcYvh6r: 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: ]] 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2FkNzdhMmUwNDgxZWMwODYxOWJjODExNzQzMTFkZDAEQJIN: 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.128 request: 00:28:40.128 { 00:28:40.128 "name": "nvme0", 00:28:40.128 "dhchap_key": "key2", 00:28:40.128 "dhchap_ctrlr_key": "ckey1", 00:28:40.128 "method": "bdev_nvme_set_keys", 00:28:40.128 "req_id": 1 00:28:40.128 } 00:28:40.128 Got JSON-RPC error response 00:28:40.128 response: 00:28:40.128 { 00:28:40.128 "code": -13, 00:28:40.128 "message": "Permission denied" 00:28:40.128 } 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.128 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:40.129 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.129 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.129 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.129 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:40.129 17:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:41.066 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.066 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:41.066 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.066 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.066 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:41.326 17:11:33 
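Two details of this re-keying phase are easy to miss in the noise. First, bdev_nvme_set_keys re-authenticates a live controller: rotating nvme0 to the pair the target currently holds (@133) succeeds, while a mismatched pair (@136, @147) is rejected with -13 "Permission denied" rather than the -5 seen at attach time, because here the transport session already exists. Second, these controllers were attached with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1, and instead of detaching them the test polls (@137, @148) until the failed controller has been dropped and the count reaches zero. A sketch of that wait loop:

  # Wait for the failed controller to age out (sketch of the @148 loop).
  while (( $(scripts/rpc.py bdev_nvme_get_controllers | jq length) != 0 )); do
      sleep 1s
  done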
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:41.326 rmmod nvme_tcp 00:28:41.326 rmmod nvme_fabrics 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2118094 ']' 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2118094 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2118094 ']' 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2118094 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2118094 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2118094' 00:28:41.326 killing process with pid 2118094 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2118094 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2118094 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:41.326 17:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.869 17:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:43.869 17:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:43.869 17:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:43.869 17:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:43.869 17:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:43.869 17:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:43.869 17:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:43.869 17:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:43.869 17:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:43.869 17:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:43.869 17:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:43.869 17:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:43.869 17:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:47.250 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:47.250 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:47.250 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:47.250 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:47.250 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:47.250 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:47.250 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:47.250 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:47.250 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:47.250 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:47.250 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:47.250 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:47.250 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:47.250 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:47.250 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:47.250 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:47.250 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:47.823 17:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.zO0 /tmp/spdk.key-null.hSt /tmp/spdk.key-sha256.Fri /tmp/spdk.key-sha384.Y4F /tmp/spdk.key-sha512.CWj /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:47.823 17:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:51.127 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:51.127 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:51.127 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:28:51.127 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:51.127 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:51.127 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:51.127 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:51.127 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:51.127 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:51.127 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:51.127 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:51.127 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:51.127 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:51.127 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:51.127 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:51.127 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:51.127 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:51.393 00:28:51.393 real 1m0.963s 00:28:51.393 user 0m54.502s 00:28:51.393 sys 0m16.354s 00:28:51.393 17:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:51.393 17:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.393 ************************************ 00:28:51.393 END TEST nvmf_auth_host 00:28:51.393 ************************************ 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.657 ************************************ 00:28:51.657 START TEST nvmf_digest 00:28:51.657 ************************************ 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:51.657 * Looking for test storage... 
00:28:51.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:51.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.657 --rc genhtml_branch_coverage=1 00:28:51.657 --rc genhtml_function_coverage=1 00:28:51.657 --rc genhtml_legend=1 00:28:51.657 --rc geninfo_all_blocks=1 00:28:51.657 --rc geninfo_unexecuted_blocks=1 00:28:51.657 00:28:51.657 ' 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:51.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.657 --rc genhtml_branch_coverage=1 00:28:51.657 --rc genhtml_function_coverage=1 00:28:51.657 --rc genhtml_legend=1 00:28:51.657 --rc geninfo_all_blocks=1 00:28:51.657 --rc geninfo_unexecuted_blocks=1 00:28:51.657 00:28:51.657 ' 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:51.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.657 --rc genhtml_branch_coverage=1 00:28:51.657 --rc genhtml_function_coverage=1 00:28:51.657 --rc genhtml_legend=1 00:28:51.657 --rc geninfo_all_blocks=1 00:28:51.657 --rc geninfo_unexecuted_blocks=1 00:28:51.657 00:28:51.657 ' 00:28:51.657 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:51.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.658 --rc genhtml_branch_coverage=1 00:28:51.658 --rc genhtml_function_coverage=1 00:28:51.658 --rc genhtml_legend=1 00:28:51.658 --rc geninfo_all_blocks=1 00:28:51.658 --rc geninfo_unexecuted_blocks=1 00:28:51.658 00:28:51.658 ' 00:28:51.658 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:51.658 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:51.658 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.658 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.658 
17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.658 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.658 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.658 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.658 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.658 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.658 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:51.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:51.919 17:11:43 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:51.919 17:11:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:00.067 
17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:00.067 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:00.067 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.067 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:00.068 Found net devices under 0000:4b:00.0: cvl_0_0 
00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:00.068 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:00.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:00.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:29:00.068 00:29:00.068 --- 10.0.0.2 ping statistics --- 00:29:00.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.068 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:00.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:00.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:29:00.068 00:29:00.068 --- 10.0.0.1 ping statistics --- 00:29:00.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.068 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:00.068 ************************************ 00:29:00.068 START TEST nvmf_digest_clean 00:29:00.068 ************************************ 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2135071 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2135071 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2135071 ']' 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.068 17:11:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:00.068 [2024-11-20 17:11:51.533394] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:29:00.068 [2024-11-20 17:11:51.533458] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.068 [2024-11-20 17:11:51.632446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.068 [2024-11-20 17:11:51.684671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.068 [2024-11-20 17:11:51.684723] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.068 [2024-11-20 17:11:51.684731] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.068 [2024-11-20 17:11:51.684739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.068 [2024-11-20 17:11:51.684745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
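The target bring-up traced above reduces to a few commands. A minimal sketch, assuming the same netns, build tree, and default /var/tmp/spdk.sock RPC socket as in the trace; the polling loop is a stand-in for the harness's waitforlisten helper, not the harness code itself:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    # --wait-for-rpc defers subsystem init; poll until the RPC server answers.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1   # stand-in for waitforlisten (assumption, not the harness logic)
    done
    # Release the deferred initialization before configuring the transport.
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init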
00:29:00.068 [2024-11-20 17:11:51.685525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.330 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.330 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:00.330 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:00.330 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:00.330 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:00.330 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.330 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:00.330 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:00.330 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:00.330 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.330 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:00.330 null0 00:29:00.330 [2024-11-20 17:11:52.493446] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.590 [2024-11-20 17:11:52.517777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.590 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.590 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:00.590 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:00.590 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:00.590 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:00.590 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:00.590 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:00.590 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:00.590 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2135411 00:29:00.591 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2135411 /var/tmp/bperf.sock 00:29:00.591 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2135411 ']' 00:29:00.591 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:00.591 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:00.591 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.591 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:00.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:00.591 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.591 17:11:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:00.591 [2024-11-20 17:11:52.577295] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:29:00.591 [2024-11-20 17:11:52.577360] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2135411 ] 00:29:00.591 [2024-11-20 17:11:52.671987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.591 [2024-11-20 17:11:52.724432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.532 17:11:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.532 17:11:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:01.532 17:11:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:01.532 17:11:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:01.532 17:11:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:01.532 17:11:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.532 17:11:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.792 nvme0n1 00:29:01.793 17:11:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:01.793 17:11:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:02.053 Running I/O for 2 seconds... 
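Condensed from the trace above, each run_bperf invocation follows the same four-step shape; a sketch with the flags copied from the xtrace lines (socket path, address, and NQN as logged, not re-derived):

    # 1. Start bdevperf idle (-z) with init deferred, on its own RPC socket.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # 2. Complete framework init once the socket is up.
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # 3. Attach the TCP target with data digest enabled (--ddgst drives the crc32c work).
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # 4. Kick off the timed workload against the attached nvme0n1 bdev.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests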
00:29:03.939 18813.00 IOPS, 73.49 MiB/s [2024-11-20T16:11:56.115Z] 18879.50 IOPS, 73.75 MiB/s 00:29:03.939 Latency(us) 00:29:03.939 [2024-11-20T16:11:56.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.939 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:03.939 nvme0n1 : 2.04 18531.61 72.39 0.00 0.00 6763.97 3194.88 45656.75 00:29:03.939 [2024-11-20T16:11:56.115Z] =================================================================================================================== 00:29:03.939 [2024-11-20T16:11:56.115Z] Total : 18531.61 72.39 0.00 0.00 6763.97 3194.88 45656.75 00:29:03.939 { 00:29:03.939 "results": [ 00:29:03.939 { 00:29:03.939 "job": "nvme0n1", 00:29:03.939 "core_mask": "0x2", 00:29:03.939 "workload": "randread", 00:29:03.939 "status": "finished", 00:29:03.939 "queue_depth": 128, 00:29:03.939 "io_size": 4096, 00:29:03.939 "runtime": 2.044453, 00:29:03.939 "iops": 18531.607231861042, 00:29:03.939 "mibps": 72.3890907494572, 00:29:03.939 "io_failed": 0, 00:29:03.939 "io_timeout": 0, 00:29:03.939 "avg_latency_us": 6763.966453224941, 00:29:03.939 "min_latency_us": 3194.88, 00:29:03.939 "max_latency_us": 45656.746666666666 00:29:03.939 } 00:29:03.939 ], 00:29:03.939 "core_count": 1 00:29:03.939 } 00:29:03.939 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:04.199 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:04.199 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:04.199 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:04.199 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:04.199 | select(.opcode=="crc32c") 00:29:04.199 | "\(.module_name) \(.executed)"' 00:29:04.199 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:04.199 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:04.199 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:04.199 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:04.199 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2135411 00:29:04.199 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2135411 ']' 00:29:04.199 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2135411 00:29:04.199 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:04.199 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:04.199 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2135411 00:29:04.199 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:04.199 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:29:04.199 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2135411' 00:29:04.199 killing process with pid 2135411 00:29:04.199 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2135411 00:29:04.199 Received shutdown signal, test time was about 2.000000 seconds 00:29:04.199 00:29:04.199 Latency(us) 00:29:04.199 [2024-11-20T16:11:56.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.199 [2024-11-20T16:11:56.375Z] =================================================================================================================== 00:29:04.199 [2024-11-20T16:11:56.375Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:04.199 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2135411 00:29:04.461 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:04.461 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:04.461 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:04.461 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:04.461 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:04.461 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:04.461 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:04.461 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2136096 00:29:04.461 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2136096 /var/tmp/bperf.sock 00:29:04.461 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2136096 ']' 00:29:04.461 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:04.461 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:04.461 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:04.461 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:04.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:04.461 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:04.461 17:11:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:04.461 [2024-11-20 17:11:56.527518] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:29:04.461 [2024-11-20 17:11:56.527576] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2136096 ] 00:29:04.461 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:04.461 Zero copy mechanism will not be used. 00:29:04.461 [2024-11-20 17:11:56.615727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.721 [2024-11-20 17:11:56.651624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.291 17:11:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:05.291 17:11:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:05.291 17:11:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:05.291 17:11:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:05.291 17:11:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:05.552 17:11:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:05.552 17:11:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:05.812 nvme0n1 00:29:05.813 17:11:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:05.813 17:11:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:05.813 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:05.813 Zero copy mechanism will not be used. 00:29:05.813 Running I/O for 2 seconds... 
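The MiB/s figures bdevperf prints are just IOPS scaled by the I/O size (MiB/s = IOPS x io_size / 2^20); a quick check against the 131072-byte summary that follows, using only the logged numbers:

    # At 128 KiB per I/O, throughput in MiB/s is IOPS divided by 8.
    awk 'BEGIN { printf "%.2f MiB/s\n", 4150.33 * 131072 / 1048576 }'
    # prints 518.79 MiB/s, matching the Total line in the summary below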
00:29:08.134 3721.00 IOPS, 465.12 MiB/s [2024-11-20T16:12:00.310Z] 4150.50 IOPS, 518.81 MiB/s
00:29:08.134 Latency(us)
00:29:08.134 [2024-11-20T16:12:00.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:08.134 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:08.134 nvme0n1 : 2.00 4150.33 518.79 0.00 0.00 3852.26 764.59 6908.59
00:29:08.134 [2024-11-20T16:12:00.310Z] ===================================================================================================================
00:29:08.134 [2024-11-20T16:12:00.310Z] Total : 4150.33 518.79 0.00 0.00 3852.26 764.59 6908.59
00:29:08.134 {
00:29:08.134 "results": [
00:29:08.134 {
00:29:08.134 "job": "nvme0n1",
00:29:08.134 "core_mask": "0x2",
00:29:08.134 "workload": "randread",
00:29:08.134 "status": "finished",
00:29:08.134 "queue_depth": 16,
00:29:08.134 "io_size": 131072,
00:29:08.134 "runtime": 2.003935,
00:29:08.134 "iops": 4150.334217427212,
00:29:08.134 "mibps": 518.7917771784015,
00:29:08.134 "io_failed": 0,
00:29:08.134 "io_timeout": 0,
00:29:08.134 "avg_latency_us": 3852.2643196665463,
00:29:08.134 "min_latency_us": 764.5866666666667,
00:29:08.134 "max_latency_us": 6908.586666666667
00:29:08.134 }
00:29:08.134 ],
00:29:08.134 "core_count": 1
00:29:08.134 }
00:29:08.134 17:11:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:29:08.134 17:11:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:29:08.134 17:11:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:29:08.134 17:11:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:08.134 | select(.opcode=="crc32c")
00:29:08.134 | "\(.module_name) \(.executed)"'
00:29:08.134 17:11:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:08.134 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:29:08.135 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:29:08.135 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:29:08.135 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:08.135 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2136096
00:29:08.135 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2136096 ']'
00:29:08.135 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2136096
00:29:08.135 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:29:08.135 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:08.135 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2136096
00:29:08.135 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:08.135 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:08.135 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2136096'
00:29:08.135 killing process with pid 2136096
00:29:08.135 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2136096
00:29:08.135 Received shutdown signal, test time was about 2.000000 seconds
00:29:08.135
00:29:08.135 Latency(us)
00:29:08.135 [2024-11-20T16:12:00.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:08.135 [2024-11-20T16:12:00.311Z] ===================================================================================================================
00:29:08.135 [2024-11-20T16:12:00.311Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:08.135 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2136096
00:29:08.394 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:29:08.394 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:29:08.395 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:29:08.395 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:29:08.395 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:29:08.395 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:29:08.395 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:29:08.395 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2136804
00:29:08.395 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2136804 /var/tmp/bperf.sock
00:29:08.395 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2136804 ']'
00:29:08.395 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:08.395 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:29:08.395 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:08.395 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:08.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:08.395 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:08.395 17:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:29:08.395 [2024-11-20 17:12:00.391521] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization...
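The pass/fail decision in host/digest.sh@93-@96 above does not come from the I/O results but from the accel framework's operation counters: the test reads back how many crc32c operations were executed and by which module. A sketch of that check under the same assumptions as before (this is a paraphrase of the helpers the xtrace steps through, not the verbatim script source):

  # Ask bdevperf for accel statistics and keep only the crc32c counters.
  stats=$($SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  read -r acc_module acc_executed <<< "$stats"
  # scan_dsa=false in these runs, so the digests must have been computed by the
  # software module, and at least one crc32c must actually have run.
  (( acc_executed > 0 )) && [[ $acc_module == software ]]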
00:29:08.395 [2024-11-20 17:12:00.391579] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2136804 ]
00:29:08.395 [2024-11-20 17:12:00.475035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:08.395 [2024-11-20 17:12:00.504494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:09.334 17:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:09.334 17:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:29:09.334 17:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:29:09.334 17:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:29:09.334 17:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:29:09.334 17:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:09.335 17:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:09.594 nvme0n1
00:29:09.854 17:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:29:09.854 17:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:09.854 Running I/O for 2 seconds...
00:29:11.733 30333.00 IOPS, 118.49 MiB/s [2024-11-20T16:12:03.909Z] 30002.50 IOPS, 117.20 MiB/s
00:29:11.733 Latency(us)
00:29:11.733 [2024-11-20T16:12:03.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:11.733 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:11.733 nvme0n1 : 2.01 30001.88 117.19 0.00 0.00 4259.20 2088.96 9666.56
00:29:11.733 [2024-11-20T16:12:03.909Z] ===================================================================================================================
00:29:11.733 [2024-11-20T16:12:03.909Z] Total : 30001.88 117.19 0.00 0.00 4259.20 2088.96 9666.56
00:29:11.733 {
00:29:11.733 "results": [
00:29:11.733 {
00:29:11.733 "job": "nvme0n1",
00:29:11.733 "core_mask": "0x2",
00:29:11.733 "workload": "randwrite",
00:29:11.733 "status": "finished",
00:29:11.733 "queue_depth": 128,
00:29:11.733 "io_size": 4096,
00:29:11.733 "runtime": 2.005374,
00:29:11.733 "iops": 30001.884935179172,
00:29:11.733 "mibps": 117.19486302804364,
00:29:11.733 "io_failed": 0,
00:29:11.733 "io_timeout": 0,
00:29:11.733 "avg_latency_us": 4259.195968420178,
00:29:11.733 "min_latency_us": 2088.96,
00:29:11.733 "max_latency_us": 9666.56
00:29:11.733 }
00:29:11.733 ],
00:29:11.733 "core_count": 1
00:29:11.733 }
00:29:11.733 17:12:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:29:11.733 17:12:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:29:11.733 17:12:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:29:11.733 17:12:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:11.733 | select(.opcode=="crc32c")
00:29:11.733 | "\(.module_name) \(.executed)"'
00:29:11.733 17:12:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:11.993 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:29:11.993 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:29:11.993 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:29:11.993 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:11.993 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2136804
00:29:11.993 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2136804 ']'
00:29:11.993 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2136804
00:29:11.993 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:29:11.993 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:11.993 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2136804
00:29:11.993 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:11.993 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:11.993 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2136804'
00:29:11.993 killing process with pid 2136804
00:29:11.993 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2136804
00:29:11.993 Received shutdown signal, test time was about 2.000000 seconds
00:29:11.993
00:29:11.993 Latency(us)
00:29:11.993 [2024-11-20T16:12:04.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:11.993 [2024-11-20T16:12:04.169Z] ===================================================================================================================
00:29:11.993 [2024-11-20T16:12:04.169Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:11.993 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2136804
00:29:12.254 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:29:12.254 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:29:12.254 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:29:12.254 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:29:12.254 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:29:12.254 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:29:12.254 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:29:12.254 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2137705
00:29:12.254 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2137705 /var/tmp/bperf.sock
00:29:12.254 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2137705 ']'
00:29:12.254 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:29:12.254 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:12.254 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:12.254 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:12.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:12.254 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:12.254 17:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:29:12.254 [2024-11-20 17:12:04.288086] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization...
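Every run above ends with the same killprocess sequence (common/autotest_common.sh@954-@978 in the xtrace). A rough reconstruction of the helper the trace is stepping through, assuming bash and a pid spawned by the same shell; this is a sketch inferred from the trace, not the verbatim SPDK source:

  killprocess() {
      # @954: refuse an empty pid argument
      [ -n "$1" ] || return 1
      # @958: probe liveness without actually sending a signal
      kill -0 "$1" || return 1
      # @959/@960: on Linux, look up the command name behind the pid
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$1")
      fi
      # @964: a pid fronted by sudo would need different handling; the bperf
      # processes in this job show up as reactor_1, so the test is false here
      if [ "$process_name" != sudo ]; then
          echo "killing process with pid $1"
          kill "$1"   # @973
      fi
      wait "$1"       # @978: reap the child so the RPC socket and port are really free
  }

The empty Latency table printed after each kill is bdevperf's shutdown report for the already-finished job, which is why every counter is 0.00.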
00:29:12.254 [2024-11-20 17:12:04.288145] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2137705 ]
00:29:12.254 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:12.254 Zero copy mechanism will not be used.
00:29:12.254 [2024-11-20 17:12:04.370666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:12.254 [2024-11-20 17:12:04.400151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:13.194 17:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:13.194 17:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:29:13.194 17:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:29:13.194 17:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:29:13.194 17:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:29:13.335 17:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:13.335 17:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:13.766 nvme0n1
00:29:13.766 17:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:29:13.766 17:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:13.766 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:13.766 Zero copy mechanism will not be used.
00:29:13.766 Running I/O for 2 seconds...
00:29:15.654 5454.00 IOPS, 681.75 MiB/s [2024-11-20T16:12:07.830Z] 6553.50 IOPS, 819.19 MiB/s
00:29:15.654 Latency(us)
00:29:15.654 [2024-11-20T16:12:07.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:15.654 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:15.654 nvme0n1 : 2.00 6552.49 819.06 0.00 0.00 2437.12 983.04 6744.75
00:29:15.654 [2024-11-20T16:12:07.830Z] ===================================================================================================================
00:29:15.654 [2024-11-20T16:12:07.830Z] Total : 6552.49 819.06 0.00 0.00 2437.12 983.04 6744.75
00:29:15.654 {
00:29:15.654 "results": [
00:29:15.654 {
00:29:15.654 "job": "nvme0n1",
00:29:15.654 "core_mask": "0x2",
00:29:15.654 "workload": "randwrite",
00:29:15.654 "status": "finished",
00:29:15.654 "queue_depth": 16,
00:29:15.654 "io_size": 131072,
00:29:15.654 "runtime": 2.00336,
00:29:15.654 "iops": 6552.491813752895,
00:29:15.654 "mibps": 819.0614767191119,
00:29:15.654 "io_failed": 0,
00:29:15.654 "io_timeout": 0,
00:29:15.654 "avg_latency_us": 2437.123380310302,
00:29:15.654 "min_latency_us": 983.04,
00:29:15.654 "max_latency_us": 6744.746666666667
00:29:15.654 }
00:29:15.654 ],
00:29:15.654 "core_count": 1
00:29:15.654 }
00:29:15.915 17:12:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:29:15.915 17:12:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:29:15.915 17:12:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:29:15.915 17:12:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:15.915 | select(.opcode=="crc32c")
00:29:15.915 | "\(.module_name) \(.executed)"'
00:29:15.915 17:12:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:15.916 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:29:15.916 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:29:15.916 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:29:15.916 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:15.916 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2137705
00:29:15.916 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2137705 ']'
00:29:15.916 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2137705
00:29:15.916 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:29:15.916 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:15.916 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2137705
00:29:15.916 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:15.916 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:15.916 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2137705'
00:29:15.916 killing process with pid 2137705
00:29:15.916 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2137705
00:29:15.916 Received shutdown signal, test time was about 2.000000 seconds
00:29:15.916
00:29:15.916 Latency(us)
00:29:15.916 [2024-11-20T16:12:08.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:15.916 [2024-11-20T16:12:08.092Z] ===================================================================================================================
00:29:15.916 [2024-11-20T16:12:08.092Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:15.916 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2137705
00:29:16.177 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2135071
00:29:16.177 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2135071 ']'
00:29:16.177 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2135071
00:29:16.177 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:29:16.177 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:16.177 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2135071
00:29:16.177 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:16.177 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:16.177 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2135071'
00:29:16.177 killing process with pid 2135071
00:29:16.177 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2135071
00:29:16.177 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2135071
00:29:16.177
00:29:16.177 real 0m16.881s
00:29:16.177 user 0m32.841s
00:29:16.177 sys 0m3.983s
00:29:16.177 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:16.177 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:29:16.177 ************************************
00:29:16.177 END TEST nvmf_digest_clean
00:29:16.177 ************************************
00:29:16.440 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:29:16.440 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:29:16.440 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:16.440 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:16.440 ************************************
00:29:16.440 START TEST nvmf_digest_error
00:29:16.440 ************************************
00:29:16.440 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error
00:29:16.440 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:29:16.440 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:16.440 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:16.440 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:16.440 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2138600
00:29:16.440 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2138600
00:29:16.440 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:29:16.440 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2138600 ']'
00:29:16.440 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:16.440 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:16.440 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:16.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:16.440 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:16.440 17:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:16.440 [2024-11-20 17:12:08.491270] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization...
00:29:16.440 [2024-11-20 17:12:08.491328] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:16.440 [2024-11-20 17:12:08.582842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:16.700 [2024-11-20 17:12:08.616534] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:16.700 [2024-11-20 17:12:08.616564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:16.700 [2024-11-20 17:12:08.616570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:16.700 [2024-11-20 17:12:08.616575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:16.700 [2024-11-20 17:12:08.616579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:16.700 [2024-11-20 17:12:08.617066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:17.269 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:17.269 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:17.269 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:17.269 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:17.269 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:17.269 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:17.269 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:29:17.269 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.269 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:17.269 [2024-11-20 17:12:09.323046] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:29:17.269 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.269 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:29:17.269 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:29:17.269 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.269 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:17.270 null0
00:29:17.270 [2024-11-20 17:12:09.402177] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:17.270 [2024-11-20 17:12:09.426436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:17.270 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.270 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:29:17.270 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:17.270 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:17.270 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:17.270 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:17.270 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2138958
00:29:17.270 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2138958 /var/tmp/bperf.sock
00:29:17.270 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:29:17.270 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2138958 ']'
00:29:17.270 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:17.270 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:17.270 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:17.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:17.270 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:17.270 17:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:17.530 [2024-11-20 17:12:09.483858] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization...
00:29:17.530 [2024-11-20 17:12:09.483908] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138958 ]
00:29:17.530 [2024-11-20 17:12:09.566152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:17.530 [2024-11-20 17:12:09.595985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:18.469 17:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:18.469 17:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:18.469 17:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:18.469 17:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:18.469 17:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:18.469 17:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.469 17:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:18.469 17:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.469 17:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:18.469 17:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:18.729 nvme0n1
00:29:18.730 17:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:18.730 17:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.730 17:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
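Unlike the digest_clean runs, this bperf instance is set up to fail on purpose. On the target side the crc32c opcode was assigned to the error accel module at startup (accel_assign_opc -o crc32c -m error, host/digest.sh@104 above), and once nvme0 is attached the test arms that module to corrupt 256 crc32c results. A sketch of the RPC split, assuming rpc_cmd in the trace targets the nvmf target's default socket (/var/tmp/spdk.sock) while bperf_rpc targets /var/tmp/bperf.sock:

  # Target side (rpc_cmd): route crc32c through the error-injection module,
  # start it clean, then corrupt the next 256 digest computations.
  $SPDK_DIR/scripts/rpc.py accel_assign_opc -o crc32c -m error
  $SPDK_DIR/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  $SPDK_DIR/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

  # Host side (bperf_rpc): count NVMe errors and retry indefinitely so the
  # injected failures show up as error statistics rather than aborting the run.
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

The flood of nvme_tcp.c "data digest error" messages and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions that follows is therefore the expected outcome of this test: each corrupted crc32c produces a digest mismatch on a READ, which the host logs and retries.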
00:29:18.730 17:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.730 17:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:18.730 17:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:18.730 Running I/O for 2 seconds... 00:29:18.990 [2024-11-20 17:12:10.911339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:10.911372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:10.911381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:10.919125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:10.919145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:10.919153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:10.931571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:10.931591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:10.931597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:10.941885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:10.941904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:10.941911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:10.952403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:10.952422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:10.952429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:10.959844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:10.959863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:10.959870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:10.969113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:10.969131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:10.969139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:10.978347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:10.978366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:10.978372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:10.987449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:10.987467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:10.987474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:10.997996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:10.998014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:10.998021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:11.006711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:11.006729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:11.006736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:11.015593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:11.015612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:11.015619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:11.024471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:11.024491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:11.024498] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:11.033775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:11.033792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:11.033798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:11.041715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:11.041732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:11.041738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:11.051708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:11.051725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:11.051732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:11.059752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:11.059770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:11.059780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:11.068205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:11.068223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:11.068229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:11.078008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:11.078026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:11.078032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:11.088006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:11.088024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 
17:12:11.088030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:11.096865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:11.096883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:11.096889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:11.104809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:11.104827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:11.104834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:11.114445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:11.114463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:11.114469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:11.123233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.990 [2024-11-20 17:12:11.123251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.990 [2024-11-20 17:12:11.123257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.990 [2024-11-20 17:12:11.132366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.991 [2024-11-20 17:12:11.132384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.991 [2024-11-20 17:12:11.132391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.991 [2024-11-20 17:12:11.141628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.991 [2024-11-20 17:12:11.141650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.991 [2024-11-20 17:12:11.141656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.991 [2024-11-20 17:12:11.149632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.991 [2024-11-20 17:12:11.149650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20868 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:18.991 [2024-11-20 17:12:11.149657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.991 [2024-11-20 17:12:11.160029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:18.991 [2024-11-20 17:12:11.160047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.991 [2024-11-20 17:12:11.160053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.251 [2024-11-20 17:12:11.167823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:19.251 [2024-11-20 17:12:11.167841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.251 [2024-11-20 17:12:11.167848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.251 [2024-11-20 17:12:11.177174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:19.251 [2024-11-20 17:12:11.177192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.251 [2024-11-20 17:12:11.177199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.251 [2024-11-20 17:12:11.186153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:19.251 [2024-11-20 17:12:11.186175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.251 [2024-11-20 17:12:11.186182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.251 [2024-11-20 17:12:11.195277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:19.251 [2024-11-20 17:12:11.195295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.251 [2024-11-20 17:12:11.195302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.251 [2024-11-20 17:12:11.204290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:19.251 [2024-11-20 17:12:11.204308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.251 [2024-11-20 17:12:11.204315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.251 [2024-11-20 17:12:11.213310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:19.251 [2024-11-20 17:12:11.213328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:3786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.251 [2024-11-20 17:12:11.213334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.251 [2024-11-20 17:12:11.221966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:19.251 [2024-11-20 17:12:11.221984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.251 [2024-11-20 17:12:11.221990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.251 [2024-11-20 17:12:11.230634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:19.251 [2024-11-20 17:12:11.230652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.251 [2024-11-20 17:12:11.230658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.251 [2024-11-20 17:12:11.240007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:19.251 [2024-11-20 17:12:11.240025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.252 [2024-11-20 17:12:11.240032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.252 [2024-11-20 17:12:11.249471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:19.252 [2024-11-20 17:12:11.249488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.252 [2024-11-20 17:12:11.249495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.252 [2024-11-20 17:12:11.258049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:19.252 [2024-11-20 17:12:11.258067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.252 [2024-11-20 17:12:11.258074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.252 [2024-11-20 17:12:11.267446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:19.252 [2024-11-20 17:12:11.267464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.252 [2024-11-20 17:12:11.267471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.252 [2024-11-20 17:12:11.275907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:19.252 [2024-11-20 17:12:11.275926] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.252 [2024-11-20 17:12:11.275932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.252 [2024-11-20 17:12:11.284448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040)
00:29:19.252 [2024-11-20 17:12:11.284465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.252 [2024-11-20 17:12:11.284472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern repeats from 17:12:11.293770 through 17:12:11.583292 (Jenkins time 00:29:19.252-00:29:19.514): a data digest error on tqpair=(0x2013040), the failing READ on qid:1 with varying cid and lba (len:1), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with cdw0:0 sqhd:0001 p:0 m:0 dnr:0 ...]
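The *ERROR* records above all come from nvme_tcp_accel_seq_recv_compute_crc32_done: the CRC32C data digest computed over a received DATA PDU did not match the DDGST value carried in the PDU (NVMe/TCP's optional per-PDU payload integrity check). A minimal standalone sketch of that check follows; it is not SPDK's implementation, and crc32c() / ddgst_ok() are illustrative names, but the CRC-32C parameters (reflected polynomial 0x82F63B78, init and final XOR 0xFFFFFFFF) are the ones NVMe/TCP specifies:

/* Sketch: verify an NVMe/TCP data digest (DDGST) over a payload.
 * Bitwise CRC-32C: slow but dependency-free; names are illustrative. */
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>
#include <stdbool.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;                    /* initial value */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)                /* reflected poly */
            crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B78u : 0u);
    }
    return crc ^ 0xFFFFFFFFu;                      /* final XOR */
}

/* True when the digest carried in the PDU matches the received data. */
static bool ddgst_ok(const uint8_t *data, size_t len, uint32_t recv_ddgst)
{
    return crc32c(data, len) == recv_ddgst;
}

int main(void)
{
    uint8_t payload[] = "123456789";
    uint32_t good = crc32c(payload, 9);            /* 0xE3069283, the CRC-32C check value */
    printf("digest ok:  %d\n", ddgst_ok(payload, 9, good));
    printf("digest bad: %d\n", ddgst_ok(payload, 9, good ^ 1u)); /* corrupted digest */
    return 0;
}

When the comparison fails, the transport cannot trust the payload, so the command is completed with a transport-level error instead of being delivered, which is exactly the READ/completion pairing the log shows.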
00:29:19.514 [2024-11-20 17:12:11.591972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040)
00:29:19.514 [2024-11-20 17:12:11.591990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.514 [2024-11-20 17:12:11.591997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the pattern repeats from 17:12:11.601994 through 17:12:11.891924 (Jenkins time 00:29:19.515-00:29:19.777), still all on tqpair=(0x2013040) qid:1, with varying cid and lba ...]
00:29:19.777 27867.00 IOPS, 108.86 MiB/s [2024-11-20T16:12:11.953Z]
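The performance sample interleaved above (27867.00 IOPS, 108.86 MiB/s) is self-consistent with a 4 KiB I/O size: 27867 x 4096 B = 114,143,232 B/s, and 114,143,232 / 1,048,576 = 108.86 MiB/s. The 4 KiB figure is inferred from those two numbers; the log itself does not state the I/O size.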
00:29:19.777 [2024-11-20 17:12:11.903028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040)
00:29:19.777 [2024-11-20 17:12:11.903045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.777 [2024-11-20 17:12:11.903051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the pattern repeats from 17:12:11.910674 through 17:12:12.199568 (Jenkins time 00:29:19.777-00:29:20.040), same tqpair and qid, varying cid and lba ...]
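Every completion here prints as COMMAND TRANSIENT TRANSPORT ERROR (00/22): status code type 0x0 (generic command status) with status code 0x22 (Transient Transport Error), and dnr:0 means retries are permitted, which is why the workload keeps reissuing READs rather than aborting. A small sketch of how those fields unpack from completion queue entry dword 3; the bit positions follow the NVMe base specification layout, while the variable names are illustrative:

/* Sketch: decode "(00/22) ... p:0 m:0 dnr:0" from NVMe CQE dword 3.
 * Per the NVMe base spec: bit 16 = phase tag, bits 17-24 = status code,
 * bits 25-27 = status code type, bit 30 = more, bit 31 = do-not-retry. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Build DW3 with SCT=0x0 (generic), SC=0x22 (transient transport error) */
    uint32_t dw3 = (0x22u << 17) | (0x0u << 25);

    unsigned p   = (dw3 >> 16) & 0x1u;
    unsigned sc  = (dw3 >> 17) & 0xFFu;
    unsigned sct = (dw3 >> 25) & 0x7u;
    unsigned m   = (dw3 >> 30) & 0x1u;
    unsigned dnr = (dw3 >> 31) & 0x1u;

    /* Prints "(00/22) p:0 m:0 dnr:0", matching the log's format */
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    return 0;
}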
00:29:20.040 [2024-11-20 17:12:12.208224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040)
00:29:20.040 [2024-11-20 17:12:12.208240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.040 [2024-11-20 17:12:12.208247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the pattern repeats from 17:12:12.216701 through 17:12:12.509944 (Jenkins time 00:29:20.301-00:29:20.564), same tqpair and qid, varying cid and lba ...]
00:29:20.564 [2024-11-20 17:12:12.519795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040)
00:29:20.564 [2024-11-20 17:12:12.519812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.564 [2024-11-20 17:12:12.519818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0
m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.528816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.528833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.528839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.539112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.539129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.539135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.548895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.548916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.548922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.559096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.559113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.559119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.566877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.566893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.566899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.576338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.576355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.576361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.585548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.585565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.585572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.594201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.594218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.594224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.602860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.602877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.602883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.612529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.612545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.612551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.620792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.620808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.620814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.629104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.629121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.629127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.638998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.639015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.639021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.650459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.650476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.650483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.659407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.659424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.659431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.668742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.668759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.668765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.677314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.677330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.677337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.685359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.685376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.685382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.693885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.693902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.693908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.703024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.703045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.703051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.713221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.713238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:20.564 [2024-11-20 17:12:12.713244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.721789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.721807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.721813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.564 [2024-11-20 17:12:12.729653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.564 [2024-11-20 17:12:12.729669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.564 [2024-11-20 17:12:12.729676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.826 [2024-11-20 17:12:12.739739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.826 [2024-11-20 17:12:12.739757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.826 [2024-11-20 17:12:12.739763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.826 [2024-11-20 17:12:12.748603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.826 [2024-11-20 17:12:12.748621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.826 [2024-11-20 17:12:12.748627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.826 [2024-11-20 17:12:12.757714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.826 [2024-11-20 17:12:12.757731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.826 [2024-11-20 17:12:12.757738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.826 [2024-11-20 17:12:12.766949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.826 [2024-11-20 17:12:12.766967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.826 [2024-11-20 17:12:12.766973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.827 [2024-11-20 17:12:12.775895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.827 [2024-11-20 17:12:12.775913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:6941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.827 [2024-11-20 17:12:12.775919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.827 [2024-11-20 17:12:12.784021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.827 [2024-11-20 17:12:12.784039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.827 [2024-11-20 17:12:12.784045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.827 [2024-11-20 17:12:12.794763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.827 [2024-11-20 17:12:12.794782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.827 [2024-11-20 17:12:12.794788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.827 [2024-11-20 17:12:12.805869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.827 [2024-11-20 17:12:12.805887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.827 [2024-11-20 17:12:12.805893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.827 [2024-11-20 17:12:12.813683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.827 [2024-11-20 17:12:12.813700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.827 [2024-11-20 17:12:12.813707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.827 [2024-11-20 17:12:12.822550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.827 [2024-11-20 17:12:12.822568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.827 [2024-11-20 17:12:12.822574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.827 [2024-11-20 17:12:12.831882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.827 [2024-11-20 17:12:12.831899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.827 [2024-11-20 17:12:12.831905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.827 [2024-11-20 17:12:12.840286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040) 00:29:20.827 [2024-11-20 17:12:12.840303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.827 [2024-11-20 17:12:12.840309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.827 [2024-11-20 17:12:12.848466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040)
00:29:20.827 [2024-11-20 17:12:12.848483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.827 [2024-11-20 17:12:12.848489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.827 [2024-11-20 17:12:12.857850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040)
00:29:20.827 [2024-11-20 17:12:12.857868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.827 [2024-11-20 17:12:12.857877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.827 [2024-11-20 17:12:12.866949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040)
00:29:20.827 [2024-11-20 17:12:12.866966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.827 [2024-11-20 17:12:12.866972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.827 [2024-11-20 17:12:12.875633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040)
00:29:20.827 [2024-11-20 17:12:12.875650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.827 [2024-11-20 17:12:12.875656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.827 [2024-11-20 17:12:12.884331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040)
00:29:20.827 [2024-11-20 17:12:12.884348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.827 [2024-11-20 17:12:12.884355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.827 [2024-11-20 17:12:12.893760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013040)
00:29:20.827 [2024-11-20 17:12:12.893777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.827 [2024-11-20 17:12:12.893783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.827 27964.00 IOPS, 109.23 MiB/s
00:29:20.827 Latency(us)
00:29:20.827 [2024-11-20T16:12:13.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:20.827 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:20.827 nvme0n1 : 2.00 27978.13 109.29 0.00 0.00 4570.47 2348.37 15728.64
00:29:20.827 [2024-11-20T16:12:13.003Z] ===================================================================================================================
00:29:20.827 [2024-11-20T16:12:13.003Z] Total : 27978.13 109.29 0.00 0.00 4570.47 2348.37 15728.64
00:29:20.827 {
00:29:20.827   "results": [
00:29:20.827     {
00:29:20.827       "job": "nvme0n1",
00:29:20.827       "core_mask": "0x2",
00:29:20.827       "workload": "randread",
00:29:20.827       "status": "finished",
00:29:20.827       "queue_depth": 128,
00:29:20.827       "io_size": 4096,
00:29:20.827       "runtime": 2.003565,
00:29:20.827       "iops": 27978.128985084088,
00:29:20.827       "mibps": 109.28956634798472,
00:29:20.827       "io_failed": 0,
00:29:20.827       "io_timeout": 0,
00:29:20.827       "avg_latency_us": 4570.471966129109,
00:29:20.827       "min_latency_us": 2348.3733333333334,
00:29:20.827       "max_latency_us": 15728.64
00:29:20.827     }
00:29:20.827   ],
00:29:20.827   "core_count": 1
00:29:20.827 }
00:29:20.827 17:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:20.827 17:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:20.827 17:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:20.827 | .driver_specific
00:29:20.827 | .nvme_error
00:29:20.827 | .status_code
00:29:20.827 | .command_transient_transport_error'
00:29:20.827 17:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:21.089 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 219 > 0 ))
00:29:21.089 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2138958
00:29:21.089 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2138958 ']'
00:29:21.089 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2138958
00:29:21.089 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:21.089 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:21.089 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2138958
00:29:21.089 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:21.089 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:21.089 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2138958'
killing process with pid 2138958
00:29:21.089 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2138958
00:29:21.089 Received shutdown signal, test time was about 2.000000 seconds
00:29:21.089
00:29:21.089 Latency(us)
00:29:21.089 [2024-11-20T16:12:13.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:21.089 [2024-11-20T16:12:13.265Z] ===================================================================================================================
00:29:21.089 [2024-11-20T16:12:13.265Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:21.089 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2138958
00:29:21.349 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:21.349 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:21.349 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:21.349 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:21.349 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:21.349 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2140007
00:29:21.349 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2140007 /var/tmp/bperf.sock
00:29:21.349 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2140007 ']'
00:29:21.349 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:21.349 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:21.349 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:21.349 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:21.349 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:21.349 17:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:21.349 [2024-11-20 17:12:13.321762] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization...
00:29:21.349 [2024-11-20 17:12:13.321819] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140007 ]
00:29:21.349 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:21.349 Zero copy mechanism will not be used.
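For reference, the transient-error check traced above (host/digest.sh's get_transient_errcount, followed by the `(( 219 > 0 ))` assertion) boils down to the following minimal sketch. The rpc.py path, socket, RPC name, and jq filter are taken verbatim from the trace; the variable names are illustrative.

#!/usr/bin/env bash
# Sketch: count NVMe transient transport errors reported by bdevperf
# for a given bdev, then assert that at least one occurred.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk  # path as seen in the trace
BPERF_SOCK=/var/tmp/bperf.sock

get_transient_errcount() {
    # bdev_get_iostat exposes per-status-code NVMe error counters when the
    # controller was created with --nvme-error-stat (see the setup below).
    "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$1" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
(( errcount > 0 )) || exit 1  # the run above counted 219 such errors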
00:29:21.349 [2024-11-20 17:12:13.404826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:21.349 [2024-11-20 17:12:13.434068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:22.291 17:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:22.291 17:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:22.291 17:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:22.291 17:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:22.291 17:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:22.291 17:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:22.291 17:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:22.291 17:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:22.291 17:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:22.291 17:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:22.553 nvme0n1
00:29:22.553 17:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:22.553 17:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:22.553 17:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:22.553 17:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:22.553 17:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:22.553 17:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:22.553 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:22.553 Zero copy mechanism will not be used.
00:29:22.553 Running I/O for 2 seconds...
00:29:22.553 [2024-11-20 17:12:14.673752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.553 [2024-11-20 17:12:14.673788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-11-20 17:12:14.673798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.553 [2024-11-20 17:12:14.683867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.553 [2024-11-20 17:12:14.683891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-11-20 17:12:14.683899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.553 [2024-11-20 17:12:14.694924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.553 [2024-11-20 17:12:14.694944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-11-20 17:12:14.694952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.553 [2024-11-20 17:12:14.704581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.553 [2024-11-20 17:12:14.704608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-11-20 17:12:14.704614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.553 [2024-11-20 17:12:14.715918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.553 [2024-11-20 17:12:14.715937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-11-20 17:12:14.715944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.553 [2024-11-20 17:12:14.721727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.553 [2024-11-20 17:12:14.721746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-11-20 17:12:14.721753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.815 [2024-11-20 17:12:14.728839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.815 [2024-11-20 17:12:14.728858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.815 [2024-11-20 17:12:14.728865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.815 [2024-11-20 17:12:14.737911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.815 [2024-11-20 17:12:14.737929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.815 [2024-11-20 17:12:14.737936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.815 [2024-11-20 17:12:14.750183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.815 [2024-11-20 17:12:14.750202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.815 [2024-11-20 17:12:14.750209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.815 [2024-11-20 17:12:14.762580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.815 [2024-11-20 17:12:14.762599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.815 [2024-11-20 17:12:14.762605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.815 [2024-11-20 17:12:14.773993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.815 [2024-11-20 17:12:14.774012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.815 [2024-11-20 17:12:14.774018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.815 [2024-11-20 17:12:14.785761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.815 [2024-11-20 17:12:14.785779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.815 [2024-11-20 17:12:14.785785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.815 [2024-11-20 17:12:14.798007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.815 [2024-11-20 17:12:14.798026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.816 [2024-11-20 17:12:14.798032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.816 [2024-11-20 17:12:14.810220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.816 [2024-11-20 17:12:14.810238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.816 [2024-11-20 17:12:14.810245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.816 [2024-11-20 17:12:14.822656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.816 [2024-11-20 17:12:14.822674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.816 [2024-11-20 17:12:14.822680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.816 [2024-11-20 17:12:14.834801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.816 [2024-11-20 17:12:14.834819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.816 [2024-11-20 17:12:14.834826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.816 [2024-11-20 17:12:14.847153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.816 [2024-11-20 17:12:14.847176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.816 [2024-11-20 17:12:14.847183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.816 [2024-11-20 17:12:14.860254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.816 [2024-11-20 17:12:14.860272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.816 [2024-11-20 17:12:14.860278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.816 [2024-11-20 17:12:14.871052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.816 [2024-11-20 17:12:14.871069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.816 [2024-11-20 17:12:14.871076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.816 [2024-11-20 17:12:14.882018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.816 [2024-11-20 17:12:14.882036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.816 [2024-11-20 17:12:14.882042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.816 [2024-11-20 17:12:14.892650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.816 [2024-11-20 17:12:14.892671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.816 [2024-11-20 17:12:14.892677] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.816 [2024-11-20 17:12:14.904609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.816 [2024-11-20 17:12:14.904628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.816 [2024-11-20 17:12:14.904634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.816 [2024-11-20 17:12:14.916679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.816 [2024-11-20 17:12:14.916697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.816 [2024-11-20 17:12:14.916703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.816 [2024-11-20 17:12:14.928859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.816 [2024-11-20 17:12:14.928878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.816 [2024-11-20 17:12:14.928884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.816 [2024-11-20 17:12:14.941565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.816 [2024-11-20 17:12:14.941584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.816 [2024-11-20 17:12:14.941591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.816 [2024-11-20 17:12:14.953016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.816 [2024-11-20 17:12:14.953034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.816 [2024-11-20 17:12:14.953040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.816 [2024-11-20 17:12:14.965581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.816 [2024-11-20 17:12:14.965599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.816 [2024-11-20 17:12:14.965605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.816 [2024-11-20 17:12:14.976506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:22.816 [2024-11-20 17:12:14.976524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:22.816 [2024-11-20 17:12:14.976530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.079 [2024-11-20 17:12:14.988510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.079 [2024-11-20 17:12:14.988528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.079 [2024-11-20 17:12:14.988535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.079 [2024-11-20 17:12:14.999841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.079 [2024-11-20 17:12:14.999860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.079 [2024-11-20 17:12:14.999867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.079 [2024-11-20 17:12:15.010522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.079 [2024-11-20 17:12:15.010541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.079 [2024-11-20 17:12:15.010547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.079 [2024-11-20 17:12:15.020784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.079 [2024-11-20 17:12:15.020802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.079 [2024-11-20 17:12:15.020809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.079 [2024-11-20 17:12:15.026228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.079 [2024-11-20 17:12:15.026247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.079 [2024-11-20 17:12:15.026253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.079 [2024-11-20 17:12:15.033300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.079 [2024-11-20 17:12:15.033318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.079 [2024-11-20 17:12:15.033325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.079 [2024-11-20 17:12:15.041150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.079 [2024-11-20 17:12:15.041174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7744 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.079 [2024-11-20 17:12:15.041181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.079 [2024-11-20 17:12:15.048435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.079 [2024-11-20 17:12:15.048454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.079 [2024-11-20 17:12:15.048460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.079 [2024-11-20 17:12:15.053954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.079 [2024-11-20 17:12:15.053973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.079 [2024-11-20 17:12:15.053979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.079 [2024-11-20 17:12:15.064576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.079 [2024-11-20 17:12:15.064594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.079 [2024-11-20 17:12:15.064604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.079 [2024-11-20 17:12:15.071418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.079 [2024-11-20 17:12:15.071437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.071443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.077039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.077057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.077064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.081931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.081949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.081955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.086584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.086603] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.086609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.092996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.093014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.093020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.098945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.098963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.098970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.105823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.105841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.105847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.111673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.111691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.111697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.116259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.116280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.116286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.118714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.118731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.118737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.123758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.123777] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.123783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.128949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.128967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.128973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.133858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.133876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.133882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.140489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.140507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.140514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.148408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.148426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.148432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.155563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.155581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.155587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.160335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.160354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.160360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.165429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.165447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.165453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.173257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.173275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.173281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.178167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.178185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.178192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.183186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.183204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.183210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.190253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.080 [2024-11-20 17:12:15.190272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.080 [2024-11-20 17:12:15.190278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.080 [2024-11-20 17:12:15.198355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.081 [2024-11-20 17:12:15.198373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.081 [2024-11-20 17:12:15.198380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.081 [2024-11-20 17:12:15.209144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.081 [2024-11-20 17:12:15.209167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.081 [2024-11-20 17:12:15.209173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.081 [2024-11-20 17:12:15.218925] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.081 [2024-11-20 17:12:15.218944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.081 [2024-11-20 17:12:15.218950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.081 [2024-11-20 17:12:15.224557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.081 [2024-11-20 17:12:15.224575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.081 [2024-11-20 17:12:15.224585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.081 [2024-11-20 17:12:15.232079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.081 [2024-11-20 17:12:15.232097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.081 [2024-11-20 17:12:15.232103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.081 [2024-11-20 17:12:15.236813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.081 [2024-11-20 17:12:15.236831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.081 [2024-11-20 17:12:15.236838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.081 [2024-11-20 17:12:15.248271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.081 [2024-11-20 17:12:15.248289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.081 [2024-11-20 17:12:15.248296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.343 [2024-11-20 17:12:15.255227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.343 [2024-11-20 17:12:15.255246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.343 [2024-11-20 17:12:15.255252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.343 [2024-11-20 17:12:15.263369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.343 [2024-11-20 17:12:15.263388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.343 [2024-11-20 17:12:15.263394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:29:23.343 [2024-11-20 17:12:15.271520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.343 [2024-11-20 17:12:15.271538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.343 [2024-11-20 17:12:15.271544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.343 [2024-11-20 17:12:15.281169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.343 [2024-11-20 17:12:15.281187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.343 [2024-11-20 17:12:15.281193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.343 [2024-11-20 17:12:15.288380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.343 [2024-11-20 17:12:15.288397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.343 [2024-11-20 17:12:15.288404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.343 [2024-11-20 17:12:15.297053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.343 [2024-11-20 17:12:15.297071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.297078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.306994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.307013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.307019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.314956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.314974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.314980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.324524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.324542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.324548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.334231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.334249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.334256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.342934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.342952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.342958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.348469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.348487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.348494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.353977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.353994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.354001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.360072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.360091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.360100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.365897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.365915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.365921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.374716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.374734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.374740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.381444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.381462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.381468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.391504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.391521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.391528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.401962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.401980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.401987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.412932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.412949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.412955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.420417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.420435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.420441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.426536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.426554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.426560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.436023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.436045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:23.344 [2024-11-20 17:12:15.436051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.443911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.443929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.443935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.449946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.449964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.449971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.454884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.454903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.454909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.461392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.461410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.461416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.465918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.465936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.465942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.470588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.470606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.470612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.475147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.475170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.475176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.481017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.481035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.481041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.489761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.489779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.344 [2024-11-20 17:12:15.489785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.344 [2024-11-20 17:12:15.494706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.344 [2024-11-20 17:12:15.494723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.345 [2024-11-20 17:12:15.494729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.345 [2024-11-20 17:12:15.499310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.345 [2024-11-20 17:12:15.499328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.345 [2024-11-20 17:12:15.499334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.345 [2024-11-20 17:12:15.504231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.345 [2024-11-20 17:12:15.504249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.345 [2024-11-20 17:12:15.504255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.345 [2024-11-20 17:12:15.510682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.345 [2024-11-20 17:12:15.510700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.345 [2024-11-20 17:12:15.510706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.345 [2024-11-20 17:12:15.515497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.345 [2024-11-20 17:12:15.515515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.345 [2024-11-20 17:12:15.515521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.608 [2024-11-20 17:12:15.518608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.518626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.518632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.525741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.525759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.525765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.534434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.534452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.534462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.540414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.540432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.540439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.545052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.545070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.545076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.552862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.552881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.552887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.559468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 
[2024-11-20 17:12:15.559486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.559492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.563994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.564012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.564018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.568580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.568598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.568604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.572932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.572950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.572956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.578636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.578654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.578660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.583062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.583084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.583090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.587344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.587362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.587368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.595172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.595190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.595197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.600500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.600518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.600524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.604974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.604992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.604998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.609369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.609387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.609393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.618555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.618573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.618579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.622938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.622956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.622963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.627246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.627263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.627269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.633680] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.633698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.609 [2024-11-20 17:12:15.633704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.609 [2024-11-20 17:12:15.638260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.609 [2024-11-20 17:12:15.638278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.638285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.642898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.642916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.642922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.647333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.647350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.647356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.654054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.654072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.654078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.658641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.658659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.658665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.663009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.663027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.663033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:29:23.610 4038.00 IOPS, 504.75 MiB/s [2024-11-20T16:12:15.786Z] [2024-11-20 17:12:15.669868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.669886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.669893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.678439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.678462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.678469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.683227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.683244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.683250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.690256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.690274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.690280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.694824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.694842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.694848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.703177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.703195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.703201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.710532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.710550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.710557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.718127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.718145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.718152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.722950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.722968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.722975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.730091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.730109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.730116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.738031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.738049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.738056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.743715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.743733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.743739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.751662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.751681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.751688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.759082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.759101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:23.610 [2024-11-20 17:12:15.759107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.768173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.768192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.768198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.610 [2024-11-20 17:12:15.775041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.610 [2024-11-20 17:12:15.775059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.610 [2024-11-20 17:12:15.775065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.873 [2024-11-20 17:12:15.782569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.873 [2024-11-20 17:12:15.782588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.873 [2024-11-20 17:12:15.782594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.873 [2024-11-20 17:12:15.788650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.873 [2024-11-20 17:12:15.788668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.873 [2024-11-20 17:12:15.788674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.873 [2024-11-20 17:12:15.794317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.873 [2024-11-20 17:12:15.794336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.873 [2024-11-20 17:12:15.794345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.874 [2024-11-20 17:12:15.804049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.874 [2024-11-20 17:12:15.804067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.874 [2024-11-20 17:12:15.804073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.874 [2024-11-20 17:12:15.814668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0) 00:29:23.874 [2024-11-20 17:12:15.814686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.874 [2024-11-20 17:12:15.814693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:23.874 [2024-11-20 17:12:15.821189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0)
00:29:23.874 [2024-11-20 17:12:15.821207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.874 [2024-11-20 17:12:15.821213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... dozens more identical data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR triplets, cid/lba/sqhd varying, for the rest of the 2-second run ...]
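Each injected failure prints as a three-line group: the transport layer reports the crc32c mismatch (data digest error), nvme_qpair.c echoes the READ command that carried it, and the completion is logged as (00/22), i.e. status code type 0x0 (generic) with status code 0x22, which the NVMe spec defines as Command Transient Transport Error. To tally the groups from a saved copy of this console output (the file name here is illustrative):

    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR' console.log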
00:29:24.664 4007.00 IOPS, 500.88 MiB/s [2024-11-20T16:12:16.840Z]
[2024-11-20 17:12:16.668400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b6e0)
00:29:24.664 [2024-11-20 17:12:16.668418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.664 [2024-11-20 17:12:16.668424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.664
00:29:24.664 Latency(us)
00:29:24.664 [2024-11-20T16:12:16.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:24.664 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:24.664 nvme0n1 : 2.00 4008.53 501.07 0.00 0.00 3987.33 481.28 16930.13
00:29:24.664 [2024-11-20T16:12:16.840Z] ===================================================================================================================
00:29:24.664 [2024-11-20T16:12:16.840Z] Total : 4008.53 501.07 0.00 0.00 3987.33 481.28 16930.13
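The JSON blob that follows is the machine-readable form of the table above; the MiB/s column is just IOPS times the 131072-byte IO size. A quick check of the reported mibps value (plain awk, nothing test-specific):

    awk 'BEGIN { printf "%.2f MiB/s\n", 4008.5282311707747 * 131072 / 1048576 }'
    # -> 501.07 MiB/s, matching the "mibps" field below

Note that io_failed stays 0 even though 260 transient transport errors are counted just below: assuming this job was configured the same way as the second one later in this log (bdev_nvme_set_options --bdev-retry-count -1), each digest failure is retried by the bdev layer rather than surfaced as a failed I/O.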
00:29:24.664 {
00:29:24.664 "results": [
00:29:24.664 {
00:29:24.664 "job": "nvme0n1",
00:29:24.664 "core_mask": "0x2",
00:29:24.664 "workload": "randread",
00:29:24.664 "status": "finished",
00:29:24.664 "queue_depth": 16,
00:29:24.664 "io_size": 131072,
00:29:24.664 "runtime": 2.003229,
00:29:24.664 "iops": 4008.5282311707747,
00:29:24.664 "mibps": 501.06602889634684,
00:29:24.664 "io_failed": 0,
00:29:24.664 "io_timeout": 0,
00:29:24.664 "avg_latency_us": 3987.325715234537,
00:29:24.664 "min_latency_us": 481.28,
00:29:24.664 "max_latency_us": 16930.133333333335
00:29:24.664 }
00:29:24.664 ],
00:29:24.664 "core_count": 1
00:29:24.664 }
00:29:24.664 17:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:24.664 17:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:24.664 17:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:24.664 | .driver_specific
00:29:24.664 | .nvme_error
00:29:24.664 | .status_code
00:29:24.664 | .command_transient_transport_error'
00:29:24.664 17:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:24.925 17:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 260 > 0 ))
00:29:24.925 17:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2140007
00:29:24.925 17:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2140007 ']'
00:29:24.925 17:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2140007
00:29:24.925 17:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:24.925 17:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:24.925 17:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2140007
00:29:24.925 17:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:24.925 17:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:24.925 17:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2140007'
killing process with pid 2140007
00:29:24.925 17:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2140007
Received shutdown signal, test time was about 2.000000 seconds
00:29:24.925
00:29:24.925 Latency(us)
00:29:24.925 [2024-11-20T16:12:17.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:24.925 [2024-11-20T16:12:17.101Z] ===================================================================================================================
00:29:24.925 [2024-11-20T16:12:17.101Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:24.925 17:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2140007
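The digest.sh@71 check above asserts that the transient-error counter it just read (260) is non-zero. A minimal reconstruction of that helper, using only the RPC and jq filter visible in the trace (the function body is an editorial sketch, not copied from digest.sh):

    get_transient_errcount() {
        # --nvme-error-stat makes bdev_nvme expose per-status-code counters
        # under driver_specific.nvme_error in bdev_get_iostat output
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }
    get_transient_errcount nvme0n1   # printed 260 for the randread job above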
00:29:24.925 17:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:24.925 17:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:24.925 17:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
17:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:24.925 17:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:24.925 17:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2140782
00:29:24.925 17:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2140782 /var/tmp/bperf.sock
00:29:24.925 17:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2140782 ']'
00:29:24.925 17:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:24.925 17:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:24.925 17:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:24.925 17:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:24.925 17:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:24.925 17:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:25.186 [2024-11-20 17:12:17.093826] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization...
00:29:25.186 [2024-11-20 17:12:17.093885] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140782 ]
00:29:25.186 [2024-11-20 17:12:17.177374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:25.186 [2024-11-20 17:12:17.206578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:25.756 17:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:25.756 17:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:25.756 17:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:25.756 17:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:26.017 17:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:26.017 17:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:26.017 17:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:26.017 17:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
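Every bperf_rpc line expands to the same scripts/rpc.py call against the bdevperf RPC socket, as the paired digest.sh@61/@18 lines above show. A plausible one-line wrapper (a sketch; the real digest.sh definition may differ):

    bperf_rpc() {
        # talk to the bdevperf instance started with -r /var/tmp/bperf.sock
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
    }

Of the two options set here, --nvme-error-stat turns on the per-status-code counters that get_transient_errcount reads, and --bdev-retry-count -1 lifts the retry cap so the injected digest failures are retried instead of failing the workload.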
00:29:26.017 17:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
17:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:26.591 nvme0n1
00:29:26.591 17:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:26.591 17:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:26.591 17:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:26.591 17:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:26.591 17:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:26.591 17:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:26.591 Running I/O for 2 seconds...
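With the crc32c injector re-armed (-t corrupt -i 256) and the controller re-attached with --ddgst, the randwrite run below hits the same digest failures from the other direction: the mismatches are now reported by data_crc32_calc_done in tcp.c rather than the initiator's nvme_tcp.c, consistent with the digest being verified on data received for writes, while nvme_qpair.c still prints each WRITE and its (00/22) transient-transport completion. After the run, digest.sh repeats the same counter check for the write path (a sketch reusing the helper outlined earlier):

    (( $(get_transient_errcount nvme0n1) > 0 ))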
cid:9 nsid:1 lba:12937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.591 [2024-11-20 17:12:18.625699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.591 [2024-11-20 17:12:18.633450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eeb760 00:29:26.591 [2024-11-20 17:12:18.634181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.591 [2024-11-20 17:12:18.634198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.591 [2024-11-20 17:12:18.641958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eec840 00:29:26.591 [2024-11-20 17:12:18.642715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.591 [2024-11-20 17:12:18.642732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.591 [2024-11-20 17:12:18.650527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eed920 00:29:26.591 [2024-11-20 17:12:18.651258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.591 [2024-11-20 17:12:18.651275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.591 [2024-11-20 17:12:18.659063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eeea00 00:29:26.591 [2024-11-20 17:12:18.659812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.591 [2024-11-20 17:12:18.659828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.591 [2024-11-20 17:12:18.667619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eefae0 00:29:26.591 [2024-11-20 17:12:18.668335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.591 [2024-11-20 17:12:18.668355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.591 [2024-11-20 17:12:18.676118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef0bc0 00:29:26.591 [2024-11-20 17:12:18.676854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.591 [2024-11-20 17:12:18.676870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.591 [2024-11-20 17:12:18.684617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef1ca0 00:29:26.591 [2024-11-20 17:12:18.685321] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.591 [2024-11-20 17:12:18.685338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.591 [2024-11-20 17:12:18.693104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef2d80 00:29:26.591 [2024-11-20 17:12:18.693856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.591 [2024-11-20 17:12:18.693872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.591 [2024-11-20 17:12:18.701624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef3e60 00:29:26.591 [2024-11-20 17:12:18.702347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.591 [2024-11-20 17:12:18.702364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.591 [2024-11-20 17:12:18.710126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef4f40 00:29:26.591 [2024-11-20 17:12:18.710825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.591 [2024-11-20 17:12:18.710841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.591 [2024-11-20 17:12:18.718619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef6020 00:29:26.592 [2024-11-20 17:12:18.719325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.592 [2024-11-20 17:12:18.719342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.592 [2024-11-20 17:12:18.727100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef7100 00:29:26.592 [2024-11-20 17:12:18.727840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.592 [2024-11-20 17:12:18.727857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.592 [2024-11-20 17:12:18.735597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef81e0 00:29:26.592 [2024-11-20 17:12:18.736320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.592 [2024-11-20 17:12:18.736336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.592 [2024-11-20 17:12:18.744123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef92c0 00:29:26.592 [2024-11-20 17:12:18.744827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.592 [2024-11-20 17:12:18.744843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.592 [2024-11-20 17:12:18.752624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efa3a0 00:29:26.592 [2024-11-20 17:12:18.753319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.592 [2024-11-20 17:12:18.753335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.592 [2024-11-20 17:12:18.761107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eea248 00:29:26.592 [2024-11-20 17:12:18.761803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.592 [2024-11-20 17:12:18.761819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.769596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eeb328 00:29:26.854 [2024-11-20 17:12:18.770323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.770339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.778105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eec408 00:29:26.854 [2024-11-20 17:12:18.778848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.778864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.786623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eed4e8 00:29:26.854 [2024-11-20 17:12:18.787382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.787398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.795135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eee5c8 00:29:26.854 [2024-11-20 17:12:18.795872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.795887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.803640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eef6a8 00:29:26.854 [2024-11-20 
17:12:18.804399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.804415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.812122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef0788 00:29:26.854 [2024-11-20 17:12:18.812871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.812887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.820600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef1868 00:29:26.854 [2024-11-20 17:12:18.821322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.821338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.829122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef2948 00:29:26.854 [2024-11-20 17:12:18.829858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.829874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.837657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef3a28 00:29:26.854 [2024-11-20 17:12:18.838391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.838407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.846175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef4b08 00:29:26.854 [2024-11-20 17:12:18.846906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.846922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.854668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef5be8 00:29:26.854 [2024-11-20 17:12:18.855368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.855385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.863179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef6cc8 
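Note: the three-message pattern repeating above and below is one injected failure per affected WRITE. The accel_error injector armed earlier (accel_error_inject_error -o crc32c -t corrupt -i 256, sent with rpc_cmd, i.e. to the target application) corrupts CRC32C results, so the target's data-digest verification of the incoming payload fails in tcp.c, and the host driver then prints the failed command together with its completion: COMMAND TRANSIENT TRANSPORT ERROR (00/22), status-code type 0, status code 0x22, with dnr:0 meaning the Do Not Retry bit is clear. Combined with the --bdev-retry-count -1 from the setup, every such write is retried rather than surfaced as an I/O error. A rough sanity check one could run over a saved copy of this output (bperf.log is an illustrative name):

    # One digest error should pair with one transient-transport completion,
    # and no completion may carry dnr:1 (that would make the error fatal).
    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' bperf.log
    grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' bperf.log
    grep -c 'TRANSIENT TRANSPORT ERROR.*dnr:1' bperf.log   # expect 0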
00:29:26.854 [2024-11-20 17:12:18.863920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.863935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.871668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef7da8 00:29:26.854 [2024-11-20 17:12:18.872370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.872386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.880149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef8e88 00:29:26.854 [2024-11-20 17:12:18.880904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.880919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.888644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef9f68 00:29:26.854 [2024-11-20 17:12:18.889381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.889400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.897113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efb048 00:29:26.854 [2024-11-20 17:12:18.897865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.897881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.905614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eea680 00:29:26.854 [2024-11-20 17:12:18.906320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.906336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.914102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eeb760 00:29:26.854 [2024-11-20 17:12:18.914807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.914823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.922575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) 
with pdu=0x200016eec840 00:29:26.854 [2024-11-20 17:12:18.923320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.923335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.931060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eed920 00:29:26.854 [2024-11-20 17:12:18.931793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.931809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.854 [2024-11-20 17:12:18.939558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eeea00 00:29:26.854 [2024-11-20 17:12:18.940305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.854 [2024-11-20 17:12:18.940321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.855 [2024-11-20 17:12:18.948030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eefae0 00:29:26.855 [2024-11-20 17:12:18.948787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.855 [2024-11-20 17:12:18.948803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.855 [2024-11-20 17:12:18.956519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef0bc0 00:29:26.855 [2024-11-20 17:12:18.957268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.855 [2024-11-20 17:12:18.957284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.855 [2024-11-20 17:12:18.965023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef1ca0 00:29:26.855 [2024-11-20 17:12:18.965784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.855 [2024-11-20 17:12:18.965799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.855 [2024-11-20 17:12:18.973537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef2d80 00:29:26.855 [2024-11-20 17:12:18.974287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.855 [2024-11-20 17:12:18.974304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.855 [2024-11-20 17:12:18.982019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x13853d0) with pdu=0x200016ef3e60 00:29:26.855 [2024-11-20 17:12:18.982760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.855 [2024-11-20 17:12:18.982776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.855 [2024-11-20 17:12:18.990499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef4f40 00:29:26.855 [2024-11-20 17:12:18.991221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.855 [2024-11-20 17:12:18.991236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.855 [2024-11-20 17:12:18.998989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef6020 00:29:26.855 [2024-11-20 17:12:18.999730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.855 [2024-11-20 17:12:18.999747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.855 [2024-11-20 17:12:19.007500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef7100 00:29:26.855 [2024-11-20 17:12:19.008230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.855 [2024-11-20 17:12:19.008246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.855 [2024-11-20 17:12:19.015994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef81e0 00:29:26.855 [2024-11-20 17:12:19.016758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.855 [2024-11-20 17:12:19.016775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:26.855 [2024-11-20 17:12:19.024497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef92c0 00:29:26.855 [2024-11-20 17:12:19.025222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.855 [2024-11-20 17:12:19.025237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.116 [2024-11-20 17:12:19.032990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efa3a0 00:29:27.116 [2024-11-20 17:12:19.033724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.116 [2024-11-20 17:12:19.033740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.116 [2024-11-20 17:12:19.041482] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eea248 00:29:27.116 [2024-11-20 17:12:19.042232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.116 [2024-11-20 17:12:19.042248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.116 [2024-11-20 17:12:19.049965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eeb328 00:29:27.116 [2024-11-20 17:12:19.050718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.116 [2024-11-20 17:12:19.050733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.116 [2024-11-20 17:12:19.058469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eec408 00:29:27.116 [2024-11-20 17:12:19.059164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.116 [2024-11-20 17:12:19.059180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.116 [2024-11-20 17:12:19.066962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eed4e8 00:29:27.116 [2024-11-20 17:12:19.067700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.116 [2024-11-20 17:12:19.067716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.116 [2024-11-20 17:12:19.075460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eee5c8 00:29:27.116 [2024-11-20 17:12:19.076210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.116 [2024-11-20 17:12:19.076227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.116 [2024-11-20 17:12:19.083940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eef6a8 00:29:27.116 [2024-11-20 17:12:19.084676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.116 [2024-11-20 17:12:19.084692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.116 [2024-11-20 17:12:19.092433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef0788 00:29:27.116 [2024-11-20 17:12:19.093174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.116 [2024-11-20 17:12:19.093190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.116 [2024-11-20 17:12:19.100936] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef1868 00:29:27.117 [2024-11-20 17:12:19.101674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.101690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.109454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef2948 00:29:27.117 [2024-11-20 17:12:19.110199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.110218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.117937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef3a28 00:29:27.117 [2024-11-20 17:12:19.118680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.118696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.126429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef4b08 00:29:27.117 [2024-11-20 17:12:19.127177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.127193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.134893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef5be8 00:29:27.117 [2024-11-20 17:12:19.135648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.135664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.143431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef6cc8 00:29:27.117 [2024-11-20 17:12:19.144183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.144199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.151942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef7da8 00:29:27.117 [2024-11-20 17:12:19.152701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.152717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 
[2024-11-20 17:12:19.160422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef8e88 00:29:27.117 [2024-11-20 17:12:19.161155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.161174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.168903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef9f68 00:29:27.117 [2024-11-20 17:12:19.169651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.169667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.177402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efb048 00:29:27.117 [2024-11-20 17:12:19.178119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.178135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.185872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eea680 00:29:27.117 [2024-11-20 17:12:19.186631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.186647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.194387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eeb760 00:29:27.117 [2024-11-20 17:12:19.195113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.195129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.202902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eec840 00:29:27.117 [2024-11-20 17:12:19.203655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.203671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.211386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eed920 00:29:27.117 [2024-11-20 17:12:19.212121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.212137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 
dnr:0 00:29:27.117 [2024-11-20 17:12:19.219876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eeea00 00:29:27.117 [2024-11-20 17:12:19.220589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.220605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.228333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eefae0 00:29:27.117 [2024-11-20 17:12:19.229076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.229092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.236825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef0bc0 00:29:27.117 [2024-11-20 17:12:19.237585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.237601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.245315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef1ca0 00:29:27.117 [2024-11-20 17:12:19.246059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.246075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.253803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef2d80 00:29:27.117 [2024-11-20 17:12:19.254560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.254576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.262284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef3e60 00:29:27.117 [2024-11-20 17:12:19.263026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.263042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.270768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef4f40 00:29:27.117 [2024-11-20 17:12:19.271523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.271538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 
cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.279256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef6020 00:29:27.117 [2024-11-20 17:12:19.279964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.279980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.117 [2024-11-20 17:12:19.287751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef7100 00:29:27.117 [2024-11-20 17:12:19.288499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.117 [2024-11-20 17:12:19.288514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.380 [2024-11-20 17:12:19.296236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef81e0 00:29:27.380 [2024-11-20 17:12:19.296966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.380 [2024-11-20 17:12:19.296982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.380 [2024-11-20 17:12:19.304733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef92c0 00:29:27.380 [2024-11-20 17:12:19.305433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.380 [2024-11-20 17:12:19.305449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.380 [2024-11-20 17:12:19.313199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efa3a0 00:29:27.380 [2024-11-20 17:12:19.313936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.380 [2024-11-20 17:12:19.313951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.380 [2024-11-20 17:12:19.321678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eea248 00:29:27.380 [2024-11-20 17:12:19.322425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.380 [2024-11-20 17:12:19.322440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.380 [2024-11-20 17:12:19.330186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eeb328 00:29:27.380 [2024-11-20 17:12:19.330895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.380 [2024-11-20 17:12:19.330913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.380 [2024-11-20 17:12:19.338713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eec408 00:29:27.380 [2024-11-20 17:12:19.339440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.380 [2024-11-20 17:12:19.339456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.380 [2024-11-20 17:12:19.347210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eed4e8 00:29:27.380 [2024-11-20 17:12:19.347940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.380 [2024-11-20 17:12:19.347955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.380 [2024-11-20 17:12:19.355691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eee5c8 00:29:27.380 [2024-11-20 17:12:19.356432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.380 [2024-11-20 17:12:19.356448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.380 [2024-11-20 17:12:19.364176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eef6a8 00:29:27.380 [2024-11-20 17:12:19.364898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.380 [2024-11-20 17:12:19.364914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.380 [2024-11-20 17:12:19.372666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef0788 00:29:27.380 [2024-11-20 17:12:19.373400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.380 [2024-11-20 17:12:19.373415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.380 [2024-11-20 17:12:19.381163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef1868 00:29:27.380 [2024-11-20 17:12:19.381902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.380 [2024-11-20 17:12:19.381917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.380 [2024-11-20 17:12:19.389670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef2948 00:29:27.380 [2024-11-20 17:12:19.390438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.380 [2024-11-20 17:12:19.390453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.380 [2024-11-20 17:12:19.398166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef3a28 00:29:27.380 [2024-11-20 17:12:19.398878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.380 [2024-11-20 17:12:19.398894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.380 [2024-11-20 17:12:19.406628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef4b08 00:29:27.380 [2024-11-20 17:12:19.407370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.380 [2024-11-20 17:12:19.407385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.380 [2024-11-20 17:12:19.415120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef5be8 00:29:27.380 [2024-11-20 17:12:19.415832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.380 [2024-11-20 17:12:19.415847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.380 [2024-11-20 17:12:19.423631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef6cc8 00:29:27.380 [2024-11-20 17:12:19.424387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.381 [2024-11-20 17:12:19.424403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.381 [2024-11-20 17:12:19.432111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef7da8 00:29:27.381 [2024-11-20 17:12:19.432819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.381 [2024-11-20 17:12:19.432835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.381 [2024-11-20 17:12:19.440605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef8e88 00:29:27.381 [2024-11-20 17:12:19.441224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.381 [2024-11-20 17:12:19.441240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.381 [2024-11-20 17:12:19.449089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef9f68 00:29:27.381 [2024-11-20 17:12:19.449834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.381 [2024-11-20 17:12:19.449849] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.381 [2024-11-20 17:12:19.457567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efb048 00:29:27.381 [2024-11-20 17:12:19.458320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.381 [2024-11-20 17:12:19.458336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.381 [2024-11-20 17:12:19.466079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eea680 00:29:27.381 [2024-11-20 17:12:19.466814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.381 [2024-11-20 17:12:19.466830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.381 [2024-11-20 17:12:19.474575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eeb760 00:29:27.381 [2024-11-20 17:12:19.475303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.381 [2024-11-20 17:12:19.475318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.381 [2024-11-20 17:12:19.483069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eec840 00:29:27.381 [2024-11-20 17:12:19.483820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.381 [2024-11-20 17:12:19.483835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.381 [2024-11-20 17:12:19.491562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eed920 00:29:27.381 [2024-11-20 17:12:19.492300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.381 [2024-11-20 17:12:19.492316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.381 [2024-11-20 17:12:19.500100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eeea00 00:29:27.381 [2024-11-20 17:12:19.500837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.381 [2024-11-20 17:12:19.500852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.381 [2024-11-20 17:12:19.508591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eefae0 00:29:27.381 [2024-11-20 17:12:19.509321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.381 [2024-11-20 
17:12:19.509337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.381 [2024-11-20 17:12:19.517240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef0bc0 00:29:27.381 [2024-11-20 17:12:19.517991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.381 [2024-11-20 17:12:19.518007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.381 [2024-11-20 17:12:19.525731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef1ca0 00:29:27.381 [2024-11-20 17:12:19.526431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.381 [2024-11-20 17:12:19.526447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.381 [2024-11-20 17:12:19.534219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef2d80 00:29:27.381 [2024-11-20 17:12:19.534959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.381 [2024-11-20 17:12:19.534974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.381 [2024-11-20 17:12:19.542692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef3e60 00:29:27.381 [2024-11-20 17:12:19.543449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.381 [2024-11-20 17:12:19.543464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.381 [2024-11-20 17:12:19.551190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef4f40 00:29:27.381 [2024-11-20 17:12:19.551939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.381 [2024-11-20 17:12:19.551957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.644 [2024-11-20 17:12:19.559684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef6020 00:29:27.644 [2024-11-20 17:12:19.560428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.644 [2024-11-20 17:12:19.560444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.644 [2024-11-20 17:12:19.568174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef7100 00:29:27.644 [2024-11-20 17:12:19.568909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
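Note: a few records below, bdevperf's mid-run progress marker reads 29781.00 IOPS, 116.33 MiB/s, so the run is still pushing roughly 30k writes per second while the injector keeps failing and retrying a steady stream of them. The two figures are mutually consistent with the bs=4096 configured at the top of this test; a one-line check:

    # IOPS x 4096-byte blocks, expressed in MiB/s
    python3 -c 'print(29781.00 * 4096 / 2**20)'   # 116.332..., matching the 116.33 MiB/s marker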
00:29:27.644 [2024-11-20 17:12:19.568924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.644 [2024-11-20 17:12:19.576681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef81e0 00:29:27.644 [2024-11-20 17:12:19.577581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.644 [2024-11-20 17:12:19.577597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:27.644 29781.00 IOPS, 116.33 MiB/s [2024-11-20T16:12:19.820Z] [2024-11-20 17:12:19.585148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efa3a0 00:29:27.644 [2024-11-20 17:12:19.585884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.644 [2024-11-20 17:12:19.585900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:27.644 [2024-11-20 17:12:19.593610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee99d8 00:29:27.644 [2024-11-20 17:12:19.594340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.644 [2024-11-20 17:12:19.594356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:27.644 [2024-11-20 17:12:19.602095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eebfd0 00:29:27.644 [2024-11-20 17:12:19.602831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.644 [2024-11-20 17:12:19.602847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:27.644 [2024-11-20 17:12:19.610599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eee190 00:29:27.644 [2024-11-20 17:12:19.611370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.644 [2024-11-20 17:12:19.611386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:27.644 [2024-11-20 17:12:19.619105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef0350 00:29:27.644 [2024-11-20 17:12:19.619804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.644 [2024-11-20 17:12:19.619820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:27.644 [2024-11-20 17:12:19.627583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef2510 00:29:27.644 [2024-11-20 17:12:19.628327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:103 nsid:1 lba:25331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.644 [2024-11-20 17:12:19.628343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:27.644 [2024-11-20 17:12:19.636063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef46d0 00:29:27.644 [2024-11-20 17:12:19.636758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.644 [2024-11-20 17:12:19.636774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:27.644 [2024-11-20 17:12:19.645595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef6890 00:29:27.644 [2024-11-20 17:12:19.646764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.644 [2024-11-20 17:12:19.646779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:27.644 [2024-11-20 17:12:19.653120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efcdd0 00:29:27.644 [2024-11-20 17:12:19.653777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.644 [2024-11-20 17:12:19.653794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:27.644 [2024-11-20 17:12:19.661907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef6020 00:29:27.644 [2024-11-20 17:12:19.662719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.644 [2024-11-20 17:12:19.662734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:27.644 [2024-11-20 17:12:19.670429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ede470 00:29:27.644 [2024-11-20 17:12:19.671256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.644 [2024-11-20 17:12:19.671271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:27.644 [2024-11-20 17:12:19.678980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efdeb0 00:29:27.644 [2024-11-20 17:12:19.679801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.644 [2024-11-20 17:12:19.679817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:27.644 [2024-11-20 17:12:19.687455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efc998 00:29:27.644 [2024-11-20 17:12:19.688264] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.644 [2024-11-20 17:12:19.688280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:27.644 [2024-11-20 17:12:19.695930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efb8b8 00:29:27.644 [2024-11-20 17:12:19.696770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.644 [2024-11-20 17:12:19.696786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:27.644 [2024-11-20 17:12:19.703851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eefae0 00:29:27.644 [2024-11-20 17:12:19.704672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.644 [2024-11-20 17:12:19.704688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:27.644 [2024-11-20 17:12:19.713242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee1710 00:29:27.644 [2024-11-20 17:12:19.714134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.644 [2024-11-20 17:12:19.714149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:27.644 [2024-11-20 17:12:19.721187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee12d8 00:29:27.644 [2024-11-20 17:12:19.721997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.645 [2024-11-20 17:12:19.722012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:27.645 [2024-11-20 17:12:19.729817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee1710 00:29:27.645 [2024-11-20 17:12:19.730640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.645 [2024-11-20 17:12:19.730656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:27.645 [2024-11-20 17:12:19.738314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee12d8 00:29:27.645 [2024-11-20 17:12:19.739121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.645 [2024-11-20 17:12:19.739137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:27.645 [2024-11-20 17:12:19.746797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee1710 00:29:27.645 [2024-11-20 
17:12:19.747579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.645 [2024-11-20 17:12:19.747595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:27.645 [2024-11-20 17:12:19.756024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee2c28 00:29:27.645 [2024-11-20 17:12:19.756954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.645 [2024-11-20 17:12:19.756970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.645 [2024-11-20 17:12:19.764493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efc560 00:29:27.645 [2024-11-20 17:12:19.765387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.645 [2024-11-20 17:12:19.765403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.645 [2024-11-20 17:12:19.773001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efe720 00:29:27.645 [2024-11-20 17:12:19.773860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.645 [2024-11-20 17:12:19.773879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.645 [2024-11-20 17:12:19.781534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eea248 00:29:27.645 [2024-11-20 17:12:19.782443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.645 [2024-11-20 17:12:19.782459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.645 [2024-11-20 17:12:19.790042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efeb58 00:29:27.645 [2024-11-20 17:12:19.790943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.645 [2024-11-20 17:12:19.790959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.645 [2024-11-20 17:12:19.798556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efd640 00:29:27.645 [2024-11-20 17:12:19.799450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.645 [2024-11-20 17:12:19.799466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.645 [2024-11-20 17:12:19.807089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee2c28 
00:29:27.645 [2024-11-20 17:12:19.808005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.645 [2024-11-20 17:12:19.808020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.645 [2024-11-20 17:12:19.815611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efc560 00:29:27.645 [2024-11-20 17:12:19.816508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.645 [2024-11-20 17:12:19.816524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.907 [2024-11-20 17:12:19.824148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efe720 00:29:27.907 [2024-11-20 17:12:19.825057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.907 [2024-11-20 17:12:19.825073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.907 [2024-11-20 17:12:19.832671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eea248 00:29:27.907 [2024-11-20 17:12:19.833572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.907 [2024-11-20 17:12:19.833588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.907 [2024-11-20 17:12:19.841213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efeb58 00:29:27.907 [2024-11-20 17:12:19.842136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.907 [2024-11-20 17:12:19.842151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.907 [2024-11-20 17:12:19.849782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efd640 00:29:27.907 [2024-11-20 17:12:19.850685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.907 [2024-11-20 17:12:19.850701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.907 [2024-11-20 17:12:19.858308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee2c28 00:29:27.907 [2024-11-20 17:12:19.859204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.907 [2024-11-20 17:12:19.859220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.907 [2024-11-20 17:12:19.866818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13853d0) with pdu=0x200016efc560 00:29:27.907 [2024-11-20 17:12:19.867726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.907 [2024-11-20 17:12:19.867742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.907 [2024-11-20 17:12:19.875356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efe720 00:29:27.907 [2024-11-20 17:12:19.876251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.907 [2024-11-20 17:12:19.876268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:19.883866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eea248 00:29:27.908 [2024-11-20 17:12:19.884767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:19.884783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:19.892388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efeb58 00:29:27.908 [2024-11-20 17:12:19.893283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:19.893299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:19.900904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efd640 00:29:27.908 [2024-11-20 17:12:19.901807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:19.901824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:19.909411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee2c28 00:29:27.908 [2024-11-20 17:12:19.910299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:19.910315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:19.917907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efc560 00:29:27.908 [2024-11-20 17:12:19.918809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:19.918826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:19.926420] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x13853d0) with pdu=0x200016efe720 00:29:27.908 [2024-11-20 17:12:19.927319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:19.927335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:19.934949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eea248 00:29:27.908 [2024-11-20 17:12:19.935853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:19.935868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:19.943477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efeb58 00:29:27.908 [2024-11-20 17:12:19.944371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:19.944386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:19.951984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efd640 00:29:27.908 [2024-11-20 17:12:19.952845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:19.952861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:19.960506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee2c28 00:29:27.908 [2024-11-20 17:12:19.961366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:19.961382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:19.969000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efc560 00:29:27.908 [2024-11-20 17:12:19.969897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:19.969912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:19.977532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efe720 00:29:27.908 [2024-11-20 17:12:19.978438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:19.978454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:19.986033] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eea248 00:29:27.908 [2024-11-20 17:12:19.986945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:19.986961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:19.994594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efeb58 00:29:27.908 [2024-11-20 17:12:19.995492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:19.995513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:20.003616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efd640 00:29:27.908 [2024-11-20 17:12:20.004511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:20.004528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:20.012135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee2c28 00:29:27.908 [2024-11-20 17:12:20.013058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:20.013075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:20.020644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efc560 00:29:27.908 [2024-11-20 17:12:20.021544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:20.021560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:20.029189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efe720 00:29:27.908 [2024-11-20 17:12:20.030089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:20.030105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:20.037725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eea248 00:29:27.908 [2024-11-20 17:12:20.038633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:20.038648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 
[2024-11-20 17:12:20.046260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efeb58 00:29:27.908 [2024-11-20 17:12:20.047151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:20.047172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:20.054794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efd640 00:29:27.908 [2024-11-20 17:12:20.055689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:20.055705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:20.063319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee2c28 00:29:27.908 [2024-11-20 17:12:20.064182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:20.064198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.908 [2024-11-20 17:12:20.071836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efc560 00:29:27.908 [2024-11-20 17:12:20.072602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.908 [2024-11-20 17:12:20.072619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:27.909 [2024-11-20 17:12:20.080108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef4b08 00:29:28.170 [2024-11-20 17:12:20.080749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.170 [2024-11-20 17:12:20.080766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.170 [2024-11-20 17:12:20.088814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee4de8 00:29:28.170 [2024-11-20 17:12:20.089591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.170 [2024-11-20 17:12:20.089607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.170 [2024-11-20 17:12:20.097337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efc128 00:29:28.170 [2024-11-20 17:12:20.098070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.170 [2024-11-20 17:12:20.098085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006d 
p:0 m:0 dnr:0 00:29:28.170 [2024-11-20 17:12:20.105830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eee5c8 00:29:28.170 [2024-11-20 17:12:20.106683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.170 [2024-11-20 17:12:20.106699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.170 [2024-11-20 17:12:20.114355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee4de8 00:29:28.170 [2024-11-20 17:12:20.115141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.115157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.123992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efc128 00:29:28.171 [2024-11-20 17:12:20.125192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.125208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.132078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eddc00 00:29:28.171 [2024-11-20 17:12:20.133152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.133171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.140184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016edfdc0 00:29:28.171 [2024-11-20 17:12:20.141148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.141167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.149310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016edfdc0 00:29:28.171 [2024-11-20 17:12:20.150619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.150634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.157939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efdeb0 00:29:28.171 [2024-11-20 17:12:20.159211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.159227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.165018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef8e88 00:29:28.171 [2024-11-20 17:12:20.165665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.165680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.173441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eeb760 00:29:28.171 [2024-11-20 17:12:20.174095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.174110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.181942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef92c0 00:29:28.171 [2024-11-20 17:12:20.182600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.182615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.190428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef8e88 00:29:28.171 [2024-11-20 17:12:20.190932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.190947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.198905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eeb760 00:29:28.171 [2024-11-20 17:12:20.199559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.199577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.207453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef92c0 00:29:28.171 [2024-11-20 17:12:20.208092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.208108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.215951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef8e88 00:29:28.171 [2024-11-20 17:12:20.216596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.216615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.224458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eeb760 00:29:28.171 [2024-11-20 17:12:20.225131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.225147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.233667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef92c0 00:29:28.171 [2024-11-20 17:12:20.234649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.234665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.242287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016edf550 00:29:28.171 [2024-11-20 17:12:20.243306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.243322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.250826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eebb98 00:29:28.171 [2024-11-20 17:12:20.251801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.251817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.259347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee6300 00:29:28.171 [2024-11-20 17:12:20.260384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.260400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.267864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efac10 00:29:28.171 [2024-11-20 17:12:20.268885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.268901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.276400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee01f8 00:29:28.171 [2024-11-20 17:12:20.277448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.277464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.284930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee73e0 00:29:28.171 [2024-11-20 17:12:20.285938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.285954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.293437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016edf550 00:29:28.171 [2024-11-20 17:12:20.294433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.294449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.301955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eebb98 00:29:28.171 [2024-11-20 17:12:20.302977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.302993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.310497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee6300 00:29:28.171 [2024-11-20 17:12:20.311519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.311534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.319050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efac10 00:29:28.171 [2024-11-20 17:12:20.320085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.320101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.327562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee01f8 00:29:28.171 [2024-11-20 17:12:20.328584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.328600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.171 [2024-11-20 17:12:20.336055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee73e0 00:29:28.171 [2024-11-20 17:12:20.337083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.171 [2024-11-20 17:12:20.337099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.433 [2024-11-20 17:12:20.344560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016edf550 00:29:28.433 [2024-11-20 17:12:20.345588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.433 [2024-11-20 17:12:20.345604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.433 [2024-11-20 17:12:20.353075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eebb98 00:29:28.433 [2024-11-20 17:12:20.354114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.433 [2024-11-20 17:12:20.354130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.433 [2024-11-20 17:12:20.361622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee6300 00:29:28.433 [2024-11-20 17:12:20.362663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.433 [2024-11-20 17:12:20.362680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.433 [2024-11-20 17:12:20.370144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efac10 00:29:28.433 [2024-11-20 17:12:20.371182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.433 [2024-11-20 17:12:20.371198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.433 [2024-11-20 17:12:20.378667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee01f8 00:29:28.433 [2024-11-20 17:12:20.379658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.433 [2024-11-20 17:12:20.379674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.433 [2024-11-20 17:12:20.387171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee73e0 00:29:28.433 [2024-11-20 17:12:20.388193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.433 [2024-11-20 17:12:20.388209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.433 [2024-11-20 17:12:20.395679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016edf550 00:29:28.433 [2024-11-20 17:12:20.396707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.433 [2024-11-20 17:12:20.396723] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.433 [2024-11-20 17:12:20.404228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eebb98 00:29:28.433 [2024-11-20 17:12:20.405243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.433 [2024-11-20 17:12:20.405259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.433 [2024-11-20 17:12:20.412751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee6300 00:29:28.433 [2024-11-20 17:12:20.413765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.433 [2024-11-20 17:12:20.413781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.433 [2024-11-20 17:12:20.421274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efac10 00:29:28.433 [2024-11-20 17:12:20.422287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.433 [2024-11-20 17:12:20.422303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.433 [2024-11-20 17:12:20.429804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee01f8 00:29:28.433 [2024-11-20 17:12:20.430839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.433 [2024-11-20 17:12:20.430855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:28.433 [2024-11-20 17:12:20.437720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee2c28 00:29:28.433 [2024-11-20 17:12:20.438703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.433 [2024-11-20 17:12:20.438721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:28.433 [2024-11-20 17:12:20.447072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016edf118 00:29:28.433 [2024-11-20 17:12:20.448064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.433 [2024-11-20 17:12:20.448080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:28.433 [2024-11-20 17:12:20.455607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee8d30 00:29:28.433 [2024-11-20 17:12:20.456623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.433 
[2024-11-20 17:12:20.456639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:28.433 [2024-11-20 17:12:20.464096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef5be8 00:29:28.433 [2024-11-20 17:12:20.465100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.433 [2024-11-20 17:12:20.465116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:28.434 [2024-11-20 17:12:20.472584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee1710 00:29:28.434 [2024-11-20 17:12:20.473597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.434 [2024-11-20 17:12:20.473613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:28.434 [2024-11-20 17:12:20.481069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efef90 00:29:28.434 [2024-11-20 17:12:20.482075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.434 [2024-11-20 17:12:20.482091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:28.434 [2024-11-20 17:12:20.489564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eec408 00:29:28.434 [2024-11-20 17:12:20.490564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.434 [2024-11-20 17:12:20.490580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:28.434 [2024-11-20 17:12:20.498087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efb480 00:29:28.434 [2024-11-20 17:12:20.499095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.434 [2024-11-20 17:12:20.499112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:28.434 [2024-11-20 17:12:20.506683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee8088 00:29:28.434 [2024-11-20 17:12:20.507693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.434 [2024-11-20 17:12:20.507709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:28.434 [2024-11-20 17:12:20.515340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee4578 00:29:28.434 [2024-11-20 17:12:20.516340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5208 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:28.434 [2024-11-20 17:12:20.516359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:28.434 [2024-11-20 17:12:20.523837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eddc00 00:29:28.434 [2024-11-20 17:12:20.524787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.434 [2024-11-20 17:12:20.524803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:28.434 [2024-11-20 17:12:20.532308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efa3a0 00:29:28.434 [2024-11-20 17:12:20.533318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.434 [2024-11-20 17:12:20.533333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:28.434 [2024-11-20 17:12:20.540799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016edf118 00:29:28.434 [2024-11-20 17:12:20.541810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.434 [2024-11-20 17:12:20.541826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:28.434 [2024-11-20 17:12:20.549324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee8d30 00:29:28.434 [2024-11-20 17:12:20.550309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.434 [2024-11-20 17:12:20.550325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:28.434 [2024-11-20 17:12:20.557792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ef5be8 00:29:28.434 [2024-11-20 17:12:20.558779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.434 [2024-11-20 17:12:20.558795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:28.434 [2024-11-20 17:12:20.566276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016ee1710 00:29:28.434 [2024-11-20 17:12:20.567281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.434 [2024-11-20 17:12:20.567296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:28.434 [2024-11-20 17:12:20.574765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016efef90 00:29:28.434 [2024-11-20 17:12:20.575715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23768 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.434 [2024-11-20 17:12:20.575731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:28.434 29892.50 IOPS, 116.77 MiB/s [2024-11-20T16:12:20.610Z] [2024-11-20 17:12:20.582975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13853d0) with pdu=0x200016eeb760 00:29:28.434 [2024-11-20 17:12:20.583964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.434 [2024-11-20 17:12:20.583979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:28.434 00:29:28.434 Latency(us) 00:29:28.434 [2024-11-20T16:12:20.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.434 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.434 nvme0n1 : 2.01 29897.29 116.79 0.00 0.00 4276.25 2116.27 14308.69 00:29:28.434 [2024-11-20T16:12:20.610Z] =================================================================================================================== 00:29:28.434 [2024-11-20T16:12:20.610Z] Total : 29897.29 116.79 0.00 0.00 4276.25 2116.27 14308.69 00:29:28.434 { 00:29:28.434 "results": [ 00:29:28.434 { 00:29:28.434 "job": "nvme0n1", 00:29:28.434 "core_mask": "0x2", 00:29:28.434 "workload": "randwrite", 00:29:28.434 "status": "finished", 00:29:28.434 "queue_depth": 128, 00:29:28.434 "io_size": 4096, 00:29:28.434 "runtime": 2.006001, 00:29:28.434 "iops": 29897.2931718379, 00:29:28.434 "mibps": 116.7863014524918, 00:29:28.434 "io_failed": 0, 00:29:28.434 "io_timeout": 0, 00:29:28.434 "avg_latency_us": 4276.253754404685, 00:29:28.434 "min_latency_us": 2116.266666666667, 00:29:28.434 "max_latency_us": 14308.693333333333 00:29:28.434 } 00:29:28.434 ], 00:29:28.434 "core_count": 1 00:29:28.434 } 00:29:28.696 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:28.696 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:28.696 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:28.696 | .driver_specific 00:29:28.696 | .nvme_error 00:29:28.696 | .status_code 00:29:28.696 | .command_transient_transport_error' 00:29:28.696 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:28.696 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 235 > 0 )) 00:29:28.696 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2140782 00:29:28.696 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2140782 ']' 00:29:28.696 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2140782 00:29:28.696 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:28.696 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:28.696 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2140782 00:29:28.696 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:28.957 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:28.957 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2140782' 00:29:28.957 killing process with pid 2140782 00:29:28.957 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2140782 00:29:28.957 Received shutdown signal, test time was about 2.000000 seconds 00:29:28.957 00:29:28.957 Latency(us) 00:29:28.957 [2024-11-20T16:12:21.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.957 [2024-11-20T16:12:21.133Z] =================================================================================================================== 00:29:28.957 [2024-11-20T16:12:21.133Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:28.957 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2140782 00:29:28.957 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:28.957 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:28.957 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:28.957 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:28.957 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:28.957 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2141467 00:29:28.957 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2141467 /var/tmp/bperf.sock 00:29:28.957 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2141467 ']' 00:29:28.957 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:28.957 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:28.957 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.957 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:28.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:28.957 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.957 17:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:28.957 [2024-11-20 17:12:21.028013] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:29:28.957 [2024-11-20 17:12:21.028070] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2141467 ] 00:29:28.957 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:28.957 Zero copy mechanism will not be used. 00:29:28.957 [2024-11-20 17:12:21.112524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.217 [2024-11-20 17:12:21.140701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.790 17:12:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:29.790 17:12:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:29.790 17:12:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:29.790 17:12:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:30.050 17:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:30.050 17:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.050 17:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:30.050 17:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.050 17:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:30.050 17:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:30.313 nvme0n1 00:29:30.313 17:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:30.313 17:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.313 17:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:30.313 17:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.313 17:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:30.313 17:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:30.313 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:30.313 Zero copy mechanism will not be used. 00:29:30.313 Running I/O for 2 seconds... 
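(Editor's note: the trace above is dense, so here is a minimal standalone sketch of the command sequence this second error-injection run executes — a 131072-byte randwrite at queue depth 16, per run_bperf_err above. Paths, the 10.0.0.2:4420 listener, and the NQN are copied from this log; it assumes the nvmf target is already up with the accel error module loaded, that rpc_cmd in digest.sh talks to the target's default RPC socket, and that bperf_rpc talks to /var/tmp/bperf.sock. This is a reading aid for the log, not the verbatim digest.sh source.)

    ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Initiator-side app: bdevperf on its own RPC socket; -z makes it wait for perform_tests.
    "$ROOT/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    # (digest.sh waits here until /var/tmp/bperf.sock is listening)

    # Count every NVMe completion status and retry indefinitely, so injected digest
    # errors land in the error counters instead of failing the workload outright.
    "$ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the controller with data digest enabled; --ddgst is what puts a CRC32C
    # digest on every data PDU of the TCP transport.
    "$ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Target side: corrupt CRC32C results in the accel layer (flags exactly as traced above).
    "$ROOT/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Drive the 2-second run; each corrupted digest surfaces as one of the
    # "Data digest error" / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pairs filling this log.
    "$ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

    # The test's pass/fail check: the transient-transport-error counter must be
    # non-zero afterwards (the preceding 4096-byte run above counted 235).
    errs=$("$ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 ))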
00:29:30.313 [2024-11-20 17:12:22.407435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.313 [2024-11-20 17:12:22.407583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.313 [2024-11-20 17:12:22.407611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.313 [2024-11-20 17:12:22.415554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.313 [2024-11-20 17:12:22.415629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.313 [2024-11-20 17:12:22.415649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.313 [2024-11-20 17:12:22.421136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.313 [2024-11-20 17:12:22.421193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.313 [2024-11-20 17:12:22.421210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.313 [2024-11-20 17:12:22.427560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.313 [2024-11-20 17:12:22.427648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.313 [2024-11-20 17:12:22.427664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.313 [2024-11-20 17:12:22.436828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.313 [2024-11-20 17:12:22.436895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.313 [2024-11-20 17:12:22.436912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.313 [2024-11-20 17:12:22.447954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.313 [2024-11-20 17:12:22.448222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.313 [2024-11-20 17:12:22.448240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.313 [2024-11-20 17:12:22.458665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.313 [2024-11-20 17:12:22.458925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.313 [2024-11-20 17:12:22.458940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.313 [2024-11-20 17:12:22.470459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.313 [2024-11-20 17:12:22.470698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.313 [2024-11-20 17:12:22.470718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.313 [2024-11-20 17:12:22.481671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.313 [2024-11-20 17:12:22.481913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.313 [2024-11-20 17:12:22.481929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.576 [2024-11-20 17:12:22.493283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.576 [2024-11-20 17:12:22.493577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.576 [2024-11-20 17:12:22.493594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.576 [2024-11-20 17:12:22.504609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.576 [2024-11-20 17:12:22.504856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.576 [2024-11-20 17:12:22.504872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.576 [2024-11-20 17:12:22.516039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.576 [2024-11-20 17:12:22.516290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.576 [2024-11-20 17:12:22.516306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.576 [2024-11-20 17:12:22.527340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.576 [2024-11-20 17:12:22.527662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.576 [2024-11-20 17:12:22.527678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.576 [2024-11-20 17:12:22.539588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.576 [2024-11-20 17:12:22.539845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.576 [2024-11-20 17:12:22.539861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.576 [2024-11-20 17:12:22.551550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.576 [2024-11-20 17:12:22.551826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.576 [2024-11-20 17:12:22.551841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.576 [2024-11-20 17:12:22.562943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.576 [2024-11-20 17:12:22.563003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.576 [2024-11-20 17:12:22.563019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.576 [2024-11-20 17:12:22.573003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.576 [2024-11-20 17:12:22.573248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.576 [2024-11-20 17:12:22.573264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.576 [2024-11-20 17:12:22.583791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.576 [2024-11-20 17:12:22.584079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.576 [2024-11-20 17:12:22.584095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.576 [2024-11-20 17:12:22.594886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.595124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 17:12:22.595140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.577 [2024-11-20 17:12:22.606622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.606894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 17:12:22.606909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.577 [2024-11-20 17:12:22.616197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.616467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 17:12:22.616483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.577 [2024-11-20 17:12:22.623320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.623610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 17:12:22.623626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.577 [2024-11-20 17:12:22.629398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.629479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 17:12:22.629494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.577 [2024-11-20 17:12:22.634524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.634595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 17:12:22.634610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.577 [2024-11-20 17:12:22.642856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.643138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 17:12:22.643154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.577 [2024-11-20 17:12:22.651452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.651506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 17:12:22.651521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.577 [2024-11-20 17:12:22.658738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.658802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 17:12:22.658817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.577 [2024-11-20 17:12:22.667031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.667096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 17:12:22.667112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.577 [2024-11-20 17:12:22.675522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.675583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 17:12:22.675599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.577 [2024-11-20 17:12:22.684918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.685144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 17:12:22.685165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.577 [2024-11-20 17:12:22.692747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.692797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 17:12:22.692813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.577 [2024-11-20 17:12:22.701871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.702093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 17:12:22.702109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.577 [2024-11-20 17:12:22.708785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.709084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 17:12:22.709100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.577 [2024-11-20 17:12:22.716257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.716315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 17:12:22.716333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.577 [2024-11-20 17:12:22.725389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.725435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 
17:12:22.725450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.577 [2024-11-20 17:12:22.731088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.731407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 17:12:22.731423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.577 [2024-11-20 17:12:22.736544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.736764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 17:12:22.736780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.577 [2024-11-20 17:12:22.744486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.577 [2024-11-20 17:12:22.744770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-11-20 17:12:22.744786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.840 [2024-11-20 17:12:22.750119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.840 [2024-11-20 17:12:22.750396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.840 [2024-11-20 17:12:22.750412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.840 [2024-11-20 17:12:22.758336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.840 [2024-11-20 17:12:22.758396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.840 [2024-11-20 17:12:22.758412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.840 [2024-11-20 17:12:22.765505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.840 [2024-11-20 17:12:22.765565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.840 [2024-11-20 17:12:22.765580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.840 [2024-11-20 17:12:22.775515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.840 [2024-11-20 17:12:22.775562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:30.840 [2024-11-20 17:12:22.775578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.840 [2024-11-20 17:12:22.783873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.840 [2024-11-20 17:12:22.783939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.840 [2024-11-20 17:12:22.783955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.840 [2024-11-20 17:12:22.794292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.840 [2024-11-20 17:12:22.794347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.840 [2024-11-20 17:12:22.794363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.840 [2024-11-20 17:12:22.802123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.840 [2024-11-20 17:12:22.802412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.840 [2024-11-20 17:12:22.802428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.840 [2024-11-20 17:12:22.810689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.840 [2024-11-20 17:12:22.810978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.840 [2024-11-20 17:12:22.810994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.840 [2024-11-20 17:12:22.820220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.840 [2024-11-20 17:12:22.820425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.840 [2024-11-20 17:12:22.820441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.840 [2024-11-20 17:12:22.829904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.840 [2024-11-20 17:12:22.830190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.840 [2024-11-20 17:12:22.830205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.836849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.837151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.837172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.844226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.844325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.844341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.852422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.852689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.852705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.859334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.859612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.859628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.868066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.868254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.868269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.874697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.874912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.874927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.882652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.882996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.883012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.889250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.889438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.889455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.896504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.896590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.896606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.905568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.905868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.905885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.912403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.912650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.912666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.921377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.921567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.921586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.930879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.931203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.931220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.938029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.938351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.938368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.944740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.944931] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.944948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.951805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.951996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.952013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.958004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.958200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.958216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.964526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.964837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.964854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.972891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.973195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.973212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.979669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.979860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.979877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.987117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.987405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.987423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.841 [2024-11-20 17:12:22.993791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.841 [2024-11-20 17:12:22.993979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.841 [2024-11-20 17:12:22.993995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.842 [2024-11-20 17:12:23.002455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.842 [2024-11-20 17:12:23.002747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.842 [2024-11-20 17:12:23.002764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.842 [2024-11-20 17:12:23.008779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:30.842 [2024-11-20 17:12:23.009087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.842 [2024-11-20 17:12:23.009104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.015383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 17:12:23.015584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.104 [2024-11-20 17:12:23.015601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.023223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 17:12:23.023413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.104 [2024-11-20 17:12:23.023430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.028426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 17:12:23.028614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.104 [2024-11-20 17:12:23.028631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.033578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 17:12:23.033777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.104 [2024-11-20 17:12:23.033794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.038236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 
17:12:23.038424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.104 [2024-11-20 17:12:23.038441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.043417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 17:12:23.043606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.104 [2024-11-20 17:12:23.043622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.048516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 17:12:23.048702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.104 [2024-11-20 17:12:23.048718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.055313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 17:12:23.055617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.104 [2024-11-20 17:12:23.055633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.062554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 17:12:23.062849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.104 [2024-11-20 17:12:23.062865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.071210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 17:12:23.071528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.104 [2024-11-20 17:12:23.071545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.079848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 17:12:23.080056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.104 [2024-11-20 17:12:23.080072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.090385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with 
pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 17:12:23.090616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.104 [2024-11-20 17:12:23.090632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.095825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 17:12:23.096024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.104 [2024-11-20 17:12:23.096040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.102626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 17:12:23.102816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.104 [2024-11-20 17:12:23.102835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.111942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 17:12:23.112246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.104 [2024-11-20 17:12:23.112263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.118492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 17:12:23.118692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.104 [2024-11-20 17:12:23.118707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.124680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 17:12:23.124866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.104 [2024-11-20 17:12:23.124883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.131887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 17:12:23.132087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.104 [2024-11-20 17:12:23.132103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.139516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 17:12:23.139838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.104 [2024-11-20 17:12:23.139855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.104 [2024-11-20 17:12:23.144172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.104 [2024-11-20 17:12:23.144372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.144388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.149085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.149288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.149304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.153258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.153301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.153317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.162432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.162732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.162748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.168805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.168920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.168937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.173431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.173639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.173656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.179136] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.179331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.179347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.184490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.184803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.184820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.190722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.190920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.190936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.195767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.195955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.195971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.199929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.200119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.200135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.203903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.204093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.204109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.210640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.210941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.210958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.219235] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.219517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.219534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.224027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.224212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.224228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.228205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.228384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.228401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.232389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.232566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.232582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.235958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.236135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.236151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.239962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.240139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.240155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.244061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.244244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.244261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.105 
[2024-11-20 17:12:23.247451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.247619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.247638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.250909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.251075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.105 [2024-11-20 17:12:23.251091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.105 [2024-11-20 17:12:23.254716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.105 [2024-11-20 17:12:23.255094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.106 [2024-11-20 17:12:23.255111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.106 [2024-11-20 17:12:23.258643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.106 [2024-11-20 17:12:23.258810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.106 [2024-11-20 17:12:23.258826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.106 [2024-11-20 17:12:23.261818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.106 [2024-11-20 17:12:23.261987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.106 [2024-11-20 17:12:23.262003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.106 [2024-11-20 17:12:23.266008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.106 [2024-11-20 17:12:23.266182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.106 [2024-11-20 17:12:23.266198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.106 [2024-11-20 17:12:23.270507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.106 [2024-11-20 17:12:23.270816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.106 [2024-11-20 17:12:23.270834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:29:31.106 [2024-11-20 17:12:23.275148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.106 [2024-11-20 17:12:23.275323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.106 [2024-11-20 17:12:23.275339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.369 [2024-11-20 17:12:23.278373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.369 [2024-11-20 17:12:23.278543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.369 [2024-11-20 17:12:23.278558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.369 [2024-11-20 17:12:23.281904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.369 [2024-11-20 17:12:23.282077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.369 [2024-11-20 17:12:23.282093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.369 [2024-11-20 17:12:23.286001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.369 [2024-11-20 17:12:23.286174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.369 [2024-11-20 17:12:23.286191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.369 [2024-11-20 17:12:23.289407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.369 [2024-11-20 17:12:23.289576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.369 [2024-11-20 17:12:23.289593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.369 [2024-11-20 17:12:23.293141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.369 [2024-11-20 17:12:23.293317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.369 [2024-11-20 17:12:23.293333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.369 [2024-11-20 17:12:23.296777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.369 [2024-11-20 17:12:23.296944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.369 [2024-11-20 17:12:23.296961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.369 [2024-11-20 17:12:23.300464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.369 [2024-11-20 17:12:23.300634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.369 [2024-11-20 17:12:23.300650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.369 [2024-11-20 17:12:23.303637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.369 [2024-11-20 17:12:23.303806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.369 [2024-11-20 17:12:23.303822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.369 [2024-11-20 17:12:23.306755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.369 [2024-11-20 17:12:23.306921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.369 [2024-11-20 17:12:23.306936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.369 [2024-11-20 17:12:23.310529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.369 [2024-11-20 17:12:23.310733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.369 [2024-11-20 17:12:23.310749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.369 [2024-11-20 17:12:23.313904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.369 [2024-11-20 17:12:23.314074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.369 [2024-11-20 17:12:23.314090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.369 [2024-11-20 17:12:23.318099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.369 [2024-11-20 17:12:23.318275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.369 [2024-11-20 17:12:23.318291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.369 [2024-11-20 17:12:23.321303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.369 [2024-11-20 17:12:23.321611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.369 [2024-11-20 17:12:23.321628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.369 [2024-11-20 17:12:23.324404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.369 [2024-11-20 17:12:23.324573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.369 [2024-11-20 17:12:23.324589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.369 [2024-11-20 17:12:23.327395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.369 [2024-11-20 17:12:23.327561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.369 [2024-11-20 17:12:23.327577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.369 [2024-11-20 17:12:23.330876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.369 [2024-11-20 17:12:23.331044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.369 [2024-11-20 17:12:23.331059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.369 [2024-11-20 17:12:23.336292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.336460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.336482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.339731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.339900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.339917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.344078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.344291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.344310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.348145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.348319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.348336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.351327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.351488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.351504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.354413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.354573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.354590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.357537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.357699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.357715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.361100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.361264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.361282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.365076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.365421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.365438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.370368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.370531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.370546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.378058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.378329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 
17:12:23.378352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.381744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.381909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.381925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.385491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.385653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.385669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.391913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.392071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.392087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.370 4613.00 IOPS, 576.62 MiB/s [2024-11-20T16:12:23.546Z] [2024-11-20 17:12:23.400873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.401033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.401049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.404282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.404438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.404454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.408230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.408408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.408423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.412322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.412473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.412488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.415687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.415840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.415856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.419874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.420024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.420040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.423032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.423190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.423206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.426019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.426171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.426187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.428748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.428896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.428913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.432145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.432302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.432318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.435259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.435413] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.435429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.438098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.438256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.438272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.370 [2024-11-20 17:12:23.441084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.370 [2024-11-20 17:12:23.441241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.370 [2024-11-20 17:12:23.441257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.443903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.444056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.444073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.446664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.446814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.446834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.449228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.449369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.449385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.451701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.451843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.451858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.454170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.454310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.454326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.456631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.456775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.456790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.459246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.459386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.459402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.461829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.461969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.461985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.464747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.464945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.464961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.469838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.470113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.470130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.477462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.477648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.477665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.480810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 
17:12:23.480956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.480972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.484310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.484450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.484466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.490689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.490974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.490990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.494022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.494080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.494095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.499139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.499322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.499338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.502580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.502667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.502682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.505949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.506003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.506018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.509374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 
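Each data_crc32_calc_done error in this run means the CRC32C digest computed over a received PDU's data section did not match the DDGST value carried in the PDU, so the affected WRITE is completed with the TRANSIENT TRANSPORT ERROR status (sct 00h / sc 22h) that spdk_nvme_print_completion reports above. The following is a minimal standalone C sketch of the digest arithmetic only — it is not SPDK's code path (SPDK uses its own crc32c helpers inside the tcp transport), and the bitwise implementation here is purely illustrative:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative bitwise CRC32C (Castagnoli, reflected polynomial
     * 0x82F63B78), the checksum family NVMe/TCP uses for the per-PDU
     * data digest (DDGST). Sketch only; real transports use table- or
     * instruction-accelerated variants. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++) {
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
            }
        }
        return crc ^ 0xFFFFFFFFu;   /* final XOR-out */
    }

    int main(void)
    {
        /* "123456789" is the conventional CRC check string; CRC32C
         * yields 0xE3069283. A receiver whose computed digest differs
         * from the DDGST carried with the PDU flags a data digest
         * error, as seen in the records above. */
        const char *check = "123456789";
        printf("crc32c = 0x%08X\n",
               crc32c((const uint8_t *)check, strlen(check)));
        return 0;
    }

A mismatch at this check is a transport-level integrity failure, not a media error, which is why each affected command resurfaces as a transient transport error rather than a namespace write fault.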
00:29:31.371 [2024-11-20 17:12:23.509435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.509450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.513221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.513331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.513346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.519545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.519614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.519630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.524846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.525078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.525094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.529756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.529854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.529870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.535710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.535764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.535781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.371 [2024-11-20 17:12:23.539394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.371 [2024-11-20 17:12:23.539458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.371 [2024-11-20 17:12:23.539473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.634 [2024-11-20 17:12:23.542858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.634 [2024-11-20 17:12:23.542914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.634 [2024-11-20 17:12:23.542930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.634 [2024-11-20 17:12:23.546695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.634 [2024-11-20 17:12:23.546750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.634 [2024-11-20 17:12:23.546766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.634 [2024-11-20 17:12:23.550165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.634 [2024-11-20 17:12:23.550222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.634 [2024-11-20 17:12:23.550241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.634 [2024-11-20 17:12:23.554217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.634 [2024-11-20 17:12:23.554273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.634 [2024-11-20 17:12:23.554289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.634 [2024-11-20 17:12:23.558333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.634 [2024-11-20 17:12:23.558389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.634 [2024-11-20 17:12:23.558405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.634 [2024-11-20 17:12:23.562236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.634 [2024-11-20 17:12:23.562294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.634 [2024-11-20 17:12:23.562310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.634 [2024-11-20 17:12:23.566019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.634 [2024-11-20 17:12:23.566190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.634 [2024-11-20 17:12:23.566205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.634 [2024-11-20 17:12:23.569778] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.634 [2024-11-20 17:12:23.569835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.634 [2024-11-20 17:12:23.569851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.634 [2024-11-20 17:12:23.573833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.634 [2024-11-20 17:12:23.573915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.634 [2024-11-20 17:12:23.573931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.634 [2024-11-20 17:12:23.578418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.634 [2024-11-20 17:12:23.578486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.634 [2024-11-20 17:12:23.578502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.634 [2024-11-20 17:12:23.582343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.634 [2024-11-20 17:12:23.582396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.634 [2024-11-20 17:12:23.582412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.634 [2024-11-20 17:12:23.587352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.634 [2024-11-20 17:12:23.587410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.634 [2024-11-20 17:12:23.587425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.635 [2024-11-20 17:12:23.590506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.635 [2024-11-20 17:12:23.590565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.635 [2024-11-20 17:12:23.590581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.635 [2024-11-20 17:12:23.595890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.635 [2024-11-20 17:12:23.596191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.635 [2024-11-20 17:12:23.596206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.635 [2024-11-20 17:12:23.605245] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.635 [2024-11-20 17:12:23.605495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.635 [2024-11-20 17:12:23.605512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.635 [2024-11-20 17:12:23.610641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.635 [2024-11-20 17:12:23.610723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.635 [2024-11-20 17:12:23.610738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.635 [2024-11-20 17:12:23.614555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.635 [2024-11-20 17:12:23.614609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.635 [2024-11-20 17:12:23.614625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.635 [2024-11-20 17:12:23.618656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.635 [2024-11-20 17:12:23.618716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.635 [2024-11-20 17:12:23.618732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.635 [2024-11-20 17:12:23.621351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.635 [2024-11-20 17:12:23.621405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.635 [2024-11-20 17:12:23.621420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.635 [2024-11-20 17:12:23.624022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.635 [2024-11-20 17:12:23.624090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.635 [2024-11-20 17:12:23.624106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.635 [2024-11-20 17:12:23.626712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.635 [2024-11-20 17:12:23.626776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.635 [2024-11-20 17:12:23.626792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.635 
[2024-11-20 17:12:23.629404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:31.635 [2024-11-20 17:12:23.629462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.635 [2024-11-20 17:12:23.629478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.635
[output condensed: the same three-message pattern (tcp.c:2233 data_crc32_calc_done data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8, nvme_qpair.c:243 WRITE sqid:1 cid:0 nsid:1 len:32 command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0002/0022/0042/0062) repeats for many more WRITE commands with varying LBAs from 17:12:23.629 through 17:12:24.305, elapsed 00:29:31.635 to 00:29:32.164]
[2024-11-20 17:12:24.305932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.164 [2024-11-20 17:12:24.305980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.164 [2024-11-20
17:12:24.305995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.164 [2024-11-20 17:12:24.309120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.164 [2024-11-20 17:12:24.309209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.164 [2024-11-20 17:12:24.309224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.164 [2024-11-20 17:12:24.311870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.164 [2024-11-20 17:12:24.311916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.165 [2024-11-20 17:12:24.311931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.165 [2024-11-20 17:12:24.314359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.165 [2024-11-20 17:12:24.314406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.165 [2024-11-20 17:12:24.314421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.165 [2024-11-20 17:12:24.317030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.165 [2024-11-20 17:12:24.317075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.165 [2024-11-20 17:12:24.317090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.165 [2024-11-20 17:12:24.322309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.165 [2024-11-20 17:12:24.322369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.165 [2024-11-20 17:12:24.322385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.165 [2024-11-20 17:12:24.330524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.165 [2024-11-20 17:12:24.330790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.165 [2024-11-20 17:12:24.330806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.426 [2024-11-20 17:12:24.335221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.426 [2024-11-20 17:12:24.335407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:32.426 [2024-11-20 17:12:24.335422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.426 [2024-11-20 17:12:24.341172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.426 [2024-11-20 17:12:24.341221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.426 [2024-11-20 17:12:24.341236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.426 [2024-11-20 17:12:24.347342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.426 [2024-11-20 17:12:24.347589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.426 [2024-11-20 17:12:24.347604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.426 [2024-11-20 17:12:24.354952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.426 [2024-11-20 17:12:24.355004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.426 [2024-11-20 17:12:24.355019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.426 [2024-11-20 17:12:24.361819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.426 [2024-11-20 17:12:24.361876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.426 [2024-11-20 17:12:24.361891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.426 [2024-11-20 17:12:24.367001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.426 [2024-11-20 17:12:24.367064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.426 [2024-11-20 17:12:24.367080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.426 [2024-11-20 17:12:24.372495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.426 [2024-11-20 17:12:24.372561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.426 [2024-11-20 17:12:24.372577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.426 [2024-11-20 17:12:24.375382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.426 [2024-11-20 17:12:24.375427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:32.426 [2024-11-20 17:12:24.375442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.426 [2024-11-20 17:12:24.378148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.426 [2024-11-20 17:12:24.378207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.426 [2024-11-20 17:12:24.378223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.426 [2024-11-20 17:12:24.381015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.426 [2024-11-20 17:12:24.381071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.426 [2024-11-20 17:12:24.381087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.426 [2024-11-20 17:12:24.383744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.426 [2024-11-20 17:12:24.383806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.426 [2024-11-20 17:12:24.383821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.427 [2024-11-20 17:12:24.386351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.427 [2024-11-20 17:12:24.386396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.427 [2024-11-20 17:12:24.386417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.427 [2024-11-20 17:12:24.392138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.427 [2024-11-20 17:12:24.392300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.427 [2024-11-20 17:12:24.392316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.427 [2024-11-20 17:12:24.398384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.427 [2024-11-20 17:12:24.398644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.427 [2024-11-20 17:12:24.398659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.427 5795.00 IOPS, 724.38 MiB/s [2024-11-20T16:12:24.603Z] [2024-11-20 17:12:24.402518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1385710) with pdu=0x200016eff3c8 00:29:32.427 [2024-11-20 17:12:24.402579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.427 [2024-11-20 17:12:24.402593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.427 00:29:32.427 Latency(us) 00:29:32.427 [2024-11-20T16:12:24.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.427 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:32.427 nvme0n1 : 2.00 5796.48 724.56 0.00 0.00 2756.99 1174.19 15291.73 00:29:32.427 [2024-11-20T16:12:24.603Z] =================================================================================================================== 00:29:32.427 [2024-11-20T16:12:24.603Z] Total : 5796.48 724.56 0.00 0.00 2756.99 1174.19 15291.73 00:29:32.427 { 00:29:32.427 "results": [ 00:29:32.427 { 00:29:32.427 "job": "nvme0n1", 00:29:32.427 "core_mask": "0x2", 00:29:32.427 "workload": "randwrite", 00:29:32.427 "status": "finished", 00:29:32.427 "queue_depth": 16, 00:29:32.427 "io_size": 131072, 00:29:32.427 "runtime": 2.00225, 00:29:32.427 "iops": 5796.478961168686, 00:29:32.427 "mibps": 724.5598701460857, 00:29:32.427 "io_failed": 0, 00:29:32.427 "io_timeout": 0, 00:29:32.427 "avg_latency_us": 2756.9945660290655, 00:29:32.427 "min_latency_us": 1174.1866666666667, 00:29:32.427 "max_latency_us": 15291.733333333334 00:29:32.427 } 00:29:32.427 ], 00:29:32.427 "core_count": 1 00:29:32.427 } 00:29:32.427 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:32.427 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:32.427 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:32.427 | .driver_specific 00:29:32.427 | .nvme_error 00:29:32.427 | .status_code 00:29:32.427 | .command_transient_transport_error' 00:29:32.427 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 375 > 0 )) 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2141467 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2141467 ']' 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2141467 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2141467 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2141467' 
00:29:32.687 killing process with pid 2141467 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2141467 00:29:32.687 Received shutdown signal, test time was about 2.000000 seconds 00:29:32.687 00:29:32.687 Latency(us) 00:29:32.687 [2024-11-20T16:12:24.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.687 [2024-11-20T16:12:24.863Z] =================================================================================================================== 00:29:32.687 [2024-11-20T16:12:24.863Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2141467 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2138600 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2138600 ']' 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2138600 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2138600 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2138600' 00:29:32.687 killing process with pid 2138600 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2138600 00:29:32.687 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2138600 00:29:32.947 00:29:32.947 real 0m16.517s 00:29:32.947 user 0m32.659s 00:29:32.947 sys 0m3.698s 00:29:32.947 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:32.947 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:32.947 ************************************ 00:29:32.947 END TEST nvmf_digest_error 00:29:32.947 ************************************ 00:29:32.947 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:32.947 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:32.947 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:32.947 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:32.947 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:32.947 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:32.947 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:32.947 17:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:32.947 rmmod nvme_tcp 00:29:32.947 rmmod nvme_fabrics 00:29:32.947 rmmod nvme_keyring 
00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2138600 ']' 00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2138600 00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2138600 ']' 00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2138600 00:29:32.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2138600) - No such process 00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2138600 is not found' 00:29:32.947 Process with pid 2138600 is not found 00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.947 17:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.995 17:12:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:34.995 00:29:34.995 real 0m43.518s 00:29:34.996 user 1m7.732s 00:29:34.996 sys 0m13.527s 00:29:34.996 17:12:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:34.996 17:12:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:34.996 ************************************ 00:29:34.996 END TEST nvmf_digest 00:29:34.996 ************************************ 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.305 
************************************ 00:29:35.305 START TEST nvmf_bdevperf 00:29:35.305 ************************************ 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:35.305 * Looking for test storage... 00:29:35.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:35.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.305 --rc genhtml_branch_coverage=1 00:29:35.305 --rc genhtml_function_coverage=1 00:29:35.305 --rc genhtml_legend=1 00:29:35.305 --rc geninfo_all_blocks=1 00:29:35.305 --rc geninfo_unexecuted_blocks=1 00:29:35.305 00:29:35.305 ' 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:35.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.305 --rc genhtml_branch_coverage=1 00:29:35.305 --rc genhtml_function_coverage=1 00:29:35.305 --rc genhtml_legend=1 00:29:35.305 --rc geninfo_all_blocks=1 00:29:35.305 --rc geninfo_unexecuted_blocks=1 00:29:35.305 00:29:35.305 ' 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:35.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.305 --rc genhtml_branch_coverage=1 00:29:35.305 --rc genhtml_function_coverage=1 00:29:35.305 --rc genhtml_legend=1 00:29:35.305 --rc geninfo_all_blocks=1 00:29:35.305 --rc geninfo_unexecuted_blocks=1 00:29:35.305 00:29:35.305 ' 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:35.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.305 --rc genhtml_branch_coverage=1 00:29:35.305 --rc genhtml_function_coverage=1 00:29:35.305 --rc genhtml_legend=1 00:29:35.305 --rc geninfo_all_blocks=1 00:29:35.305 --rc geninfo_unexecuted_blocks=1 00:29:35.305 00:29:35.305 ' 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.305 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:35.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:35.306 17:12:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:43.450 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.450 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:43.451 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:43.451 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:43.451 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:43.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:43.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:29:43.451 00:29:43.451 --- 10.0.0.2 ping statistics --- 00:29:43.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.451 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:43.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:43.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:29:43.451 00:29:43.451 --- 10.0.0.1 ping statistics --- 00:29:43.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.451 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2146494 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2146494 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:43.451 17:12:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2146494 ']' 00:29:43.451 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.451 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:43.451 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:43.451 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:43.451 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.451 [2024-11-20 17:12:35.057371] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:29:43.451 [2024-11-20 17:12:35.057443] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:43.451 [2024-11-20 17:12:35.156206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:43.451 [2024-11-20 17:12:35.209126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:43.451 [2024-11-20 17:12:35.209181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:43.451 [2024-11-20 17:12:35.209190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:43.451 [2024-11-20 17:12:35.209198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:43.451 [2024-11-20 17:12:35.209207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:43.451 [2024-11-20 17:12:35.211015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:43.451 [2024-11-20 17:12:35.211201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:43.451 [2024-11-20 17:12:35.211258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:44.024 [2024-11-20 17:12:35.940100] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:44.024 Malloc0 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.024 17:12:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:44.024 17:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.024 17:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:44.024 17:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.024 17:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:44.024 [2024-11-20 17:12:36.013369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:44.024 17:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.024 17:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:44.024 17:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:44.024 17:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:44.024 17:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:44.024 17:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:44.024 17:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:44.024 { 00:29:44.024 "params": { 00:29:44.024 "name": "Nvme$subsystem", 00:29:44.024 "trtype": "$TEST_TRANSPORT", 00:29:44.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.024 "adrfam": "ipv4", 00:29:44.024 "trsvcid": "$NVMF_PORT", 00:29:44.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.024 "hdgst": ${hdgst:-false}, 00:29:44.024 "ddgst": ${ddgst:-false} 00:29:44.024 }, 00:29:44.024 "method": "bdev_nvme_attach_controller" 00:29:44.024 } 00:29:44.024 EOF 00:29:44.024 )") 00:29:44.024 17:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:44.024 17:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:44.024 17:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:44.024 17:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:44.024 "params": { 00:29:44.024 "name": "Nvme1", 00:29:44.024 "trtype": "tcp", 00:29:44.024 "traddr": "10.0.0.2", 00:29:44.024 "adrfam": "ipv4", 00:29:44.024 "trsvcid": "4420", 00:29:44.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:44.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:44.024 "hdgst": false, 00:29:44.024 "ddgst": false 00:29:44.024 }, 00:29:44.024 "method": "bdev_nvme_attach_controller" 00:29:44.025 }' 00:29:44.025 [2024-11-20 17:12:36.071246] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
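Unrolled from the rpc_cmd trace above, the whole tgt_init provisioning plus the first bdevperf run comes down to the sketch below. The five RPC invocations are verbatim from the log; the "subsystems"/"config" wrapper around the printed bdev_nvme_attach_controller fragment is an assumption about what gen_nvmf_target_json emits around it:

```bash
rpc=./scripts/rpc.py   # talks to /var/tmp/spdk.sock by default

# Provision the target exactly as host/bdevperf.sh@17-21 does above.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Run bdevperf against it, feeding the generated attach config on a pipe fd
# (the harness's /dev/fd/62); -q 128 -o 4096 -w verify -t 1 as traced above.
./build/examples/bdevperf -q 128 -o 4096 -w verify -t 1 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)
```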
00:29:44.025 [2024-11-20 17:12:36.071321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146560 ] 00:29:44.025 [2024-11-20 17:12:36.153356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.285 [2024-11-20 17:12:36.207260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.545 Running I/O for 1 seconds... 00:29:45.488 8586.00 IOPS, 33.54 MiB/s 00:29:45.488 Latency(us) 00:29:45.488 [2024-11-20T16:12:37.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.488 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:45.488 Verification LBA range: start 0x0 length 0x4000 00:29:45.488 Nvme1n1 : 1.02 8655.71 33.81 0.00 0.00 14722.56 3017.39 12561.07 00:29:45.488 [2024-11-20T16:12:37.664Z] =================================================================================================================== 00:29:45.488 [2024-11-20T16:12:37.664Z] Total : 8655.71 33.81 0.00 0.00 14722.56 3017.39 12561.07 00:29:45.750 17:12:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2146875 00:29:45.750 17:12:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:45.750 17:12:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:45.750 17:12:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:45.750 17:12:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:45.750 17:12:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:45.750 17:12:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:45.750 17:12:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:45.750 { 00:29:45.750 "params": { 00:29:45.750 "name": "Nvme$subsystem", 00:29:45.750 "trtype": "$TEST_TRANSPORT", 00:29:45.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:45.750 "adrfam": "ipv4", 00:29:45.750 "trsvcid": "$NVMF_PORT", 00:29:45.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:45.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:45.750 "hdgst": ${hdgst:-false}, 00:29:45.750 "ddgst": ${ddgst:-false} 00:29:45.750 }, 00:29:45.750 "method": "bdev_nvme_attach_controller" 00:29:45.750 } 00:29:45.750 EOF 00:29:45.750 )") 00:29:45.750 17:12:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:45.750 17:12:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:29:45.750 17:12:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:45.750 17:12:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:45.750 "params": { 00:29:45.750 "name": "Nvme1", 00:29:45.750 "trtype": "tcp", 00:29:45.750 "traddr": "10.0.0.2", 00:29:45.750 "adrfam": "ipv4", 00:29:45.750 "trsvcid": "4420", 00:29:45.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:45.750 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:45.750 "hdgst": false, 00:29:45.750 "ddgst": false 00:29:45.750 }, 00:29:45.750 "method": "bdev_nvme_attach_controller" 00:29:45.750 }' 00:29:45.750 [2024-11-20 17:12:37.761537] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:29:45.750 [2024-11-20 17:12:37.761628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146875 ] 00:29:45.750 [2024-11-20 17:12:37.860154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.750 [2024-11-20 17:12:37.911797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.322 Running I/O for 15 seconds... 00:29:48.199 10389.00 IOPS, 40.58 MiB/s [2024-11-20T16:12:40.949Z] 10736.50 IOPS, 41.94 MiB/s [2024-11-20T16:12:40.949Z] 17:12:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2146494 00:29:48.773 17:12:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:48.773 [2024-11-20 17:12:40.720061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.773 [2024-11-20 17:12:40.720104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.773 [2024-11-20 17:12:40.720124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.773 [2024-11-20 17:12:40.720135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.773 [2024-11-20 17:12:40.720148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.773 [2024-11-20 17:12:40.720155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.773 [2024-11-20 17:12:40.720168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.773 [2024-11-20 17:12:40.720176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.773 [2024-11-20 17:12:40.720186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.773 [2024-11-20 17:12:40.720196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.773 [2024-11-20 17:12:40.720206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.773 [2024-11-20 
17:12:40.720214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~120 further nvme_qpair command/completion pairs trimmed for readability: every remaining queued WRITE (lba:85704-86368) and READ (lba:85352-85640) on qid:1 completed as ABORTED - SQ DELETION (00/08) after nvmf_tgt (pid 2146494) was killed ...]
00:29:48.776 [2024-11-20 17:12:40.722305] nvme_tcp.c:
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2361140 is same with the state(6) to be set 00:29:48.776 [2024-11-20 17:12:40.722314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:48.776 [2024-11-20 17:12:40.722320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:48.776 [2024-11-20 17:12:40.722328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85648 len:8 PRP1 0x0 PRP2 0x0 00:29:48.776 [2024-11-20 17:12:40.722336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.776 [2024-11-20 17:12:40.722420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:48.776 [2024-11-20 17:12:40.722432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.776 [2024-11-20 17:12:40.722440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:48.776 [2024-11-20 17:12:40.722448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.776 [2024-11-20 17:12:40.722455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:48.776 [2024-11-20 17:12:40.722463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.776 [2024-11-20 17:12:40.722471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:48.776 [2024-11-20 17:12:40.722478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.776 [2024-11-20 17:12:40.722485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:48.776 [2024-11-20 17:12:40.726079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.776 [2024-11-20 17:12:40.726105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:48.776 [2024-11-20 17:12:40.726875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.776 [2024-11-20 17:12:40.726892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:48.776 [2024-11-20 17:12:40.726901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:48.776 [2024-11-20 17:12:40.727124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:48.776 [2024-11-20 17:12:40.727353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.776 [2024-11-20 17:12:40.727365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.776 [2024-11-20 17:12:40.727374] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.776 [2024-11-20 17:12:40.727383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:48.776 [2024-11-20 17:12:40.740220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.776 [2024-11-20 17:12:40.740803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.776 [2024-11-20 17:12:40.740841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:48.776 [2024-11-20 17:12:40.740852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:48.776 [2024-11-20 17:12:40.741094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:48.776 [2024-11-20 17:12:40.741329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.776 [2024-11-20 17:12:40.741339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.776 [2024-11-20 17:12:40.741347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.776 [2024-11-20 17:12:40.741355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:48.776 [2024-11-20 17:12:40.754194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.776 [2024-11-20 17:12:40.754736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.776 [2024-11-20 17:12:40.754775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:48.776 [2024-11-20 17:12:40.754786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:48.776 [2024-11-20 17:12:40.755028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:48.776 [2024-11-20 17:12:40.755261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.776 [2024-11-20 17:12:40.755271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.776 [2024-11-20 17:12:40.755279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.776 [2024-11-20 17:12:40.755287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.776 [2024-11-20 17:12:40.768129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.776 [2024-11-20 17:12:40.768747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.776 [2024-11-20 17:12:40.768788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:48.776 [2024-11-20 17:12:40.768799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:48.776 [2024-11-20 17:12:40.769042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:48.776 [2024-11-20 17:12:40.769277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.776 [2024-11-20 17:12:40.769288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.776 [2024-11-20 17:12:40.769296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.776 [2024-11-20 17:12:40.769309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:48.776 [2024-11-20 17:12:40.781950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.776 [2024-11-20 17:12:40.782603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.776 [2024-11-20 17:12:40.782646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:48.776 [2024-11-20 17:12:40.782657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:48.776 [2024-11-20 17:12:40.782901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:48.777 [2024-11-20 17:12:40.783127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.777 [2024-11-20 17:12:40.783136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.777 [2024-11-20 17:12:40.783145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.777 [2024-11-20 17:12:40.783153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.777 [2024-11-20 17:12:40.795818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.777 [2024-11-20 17:12:40.796510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.777 [2024-11-20 17:12:40.796553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:48.777 [2024-11-20 17:12:40.796564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:48.777 [2024-11-20 17:12:40.796808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:48.777 [2024-11-20 17:12:40.797035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.777 [2024-11-20 17:12:40.797044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.777 [2024-11-20 17:12:40.797053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.777 [2024-11-20 17:12:40.797062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:48.777 [2024-11-20 17:12:40.809721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.777 [2024-11-20 17:12:40.810252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.777 [2024-11-20 17:12:40.810296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:48.777 [2024-11-20 17:12:40.810309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:48.777 [2024-11-20 17:12:40.810555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:48.777 [2024-11-20 17:12:40.810781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.777 [2024-11-20 17:12:40.810790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.777 [2024-11-20 17:12:40.810799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.777 [2024-11-20 17:12:40.810807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.777 [2024-11-20 17:12:40.823687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.777 [2024-11-20 17:12:40.824401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.777 [2024-11-20 17:12:40.824446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:48.777 [2024-11-20 17:12:40.824458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:48.777 [2024-11-20 17:12:40.824703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:48.777 [2024-11-20 17:12:40.824930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.777 [2024-11-20 17:12:40.824939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.777 [2024-11-20 17:12:40.824947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.777 [2024-11-20 17:12:40.824955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:48.777 [2024-11-20 17:12:40.837619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.777 [2024-11-20 17:12:40.838251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.777 [2024-11-20 17:12:40.838300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:48.777 [2024-11-20 17:12:40.838312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:48.777 [2024-11-20 17:12:40.838562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:48.777 [2024-11-20 17:12:40.838790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.777 [2024-11-20 17:12:40.838800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.777 [2024-11-20 17:12:40.838809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.777 [2024-11-20 17:12:40.838817] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.777 [2024-11-20 17:12:40.851478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.777 [2024-11-20 17:12:40.852148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.777 [2024-11-20 17:12:40.852208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:48.777 [2024-11-20 17:12:40.852220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:48.777 [2024-11-20 17:12:40.852470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:48.777 [2024-11-20 17:12:40.852699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.777 [2024-11-20 17:12:40.852709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.777 [2024-11-20 17:12:40.852717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.777 [2024-11-20 17:12:40.852725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:48.777 [2024-11-20 17:12:40.865401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.777 [2024-11-20 17:12:40.866032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.777 [2024-11-20 17:12:40.866086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:48.777 [2024-11-20 17:12:40.866105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:48.777 [2024-11-20 17:12:40.866369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:48.777 [2024-11-20 17:12:40.866598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.777 [2024-11-20 17:12:40.866608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.777 [2024-11-20 17:12:40.866617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.777 [2024-11-20 17:12:40.866625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.777 [2024-11-20 17:12:40.879320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.777 [2024-11-20 17:12:40.880023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.777 [2024-11-20 17:12:40.880086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:48.777 [2024-11-20 17:12:40.880099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:48.777 [2024-11-20 17:12:40.880370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:48.777 [2024-11-20 17:12:40.880602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.777 [2024-11-20 17:12:40.880612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.777 [2024-11-20 17:12:40.880620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.777 [2024-11-20 17:12:40.880630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:48.777 [2024-11-20 17:12:40.893317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.777 [2024-11-20 17:12:40.893984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.777 [2024-11-20 17:12:40.894046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:48.777 [2024-11-20 17:12:40.894058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:48.777 [2024-11-20 17:12:40.894330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:48.777 [2024-11-20 17:12:40.894560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.777 [2024-11-20 17:12:40.894569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.778 [2024-11-20 17:12:40.894578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.778 [2024-11-20 17:12:40.894587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.778 [2024-11-20 17:12:40.907265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.778 [2024-11-20 17:12:40.907929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.778 [2024-11-20 17:12:40.907992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:48.778 [2024-11-20 17:12:40.908005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:48.778 [2024-11-20 17:12:40.908275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:48.778 [2024-11-20 17:12:40.908506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.778 [2024-11-20 17:12:40.908522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.778 [2024-11-20 17:12:40.908531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.778 [2024-11-20 17:12:40.908539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:48.778 [2024-11-20 17:12:40.921208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.778 [2024-11-20 17:12:40.921810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.778 [2024-11-20 17:12:40.921839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:48.778 [2024-11-20 17:12:40.921848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:48.778 [2024-11-20 17:12:40.922073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:48.778 [2024-11-20 17:12:40.922307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.778 [2024-11-20 17:12:40.922318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.778 [2024-11-20 17:12:40.922325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.778 [2024-11-20 17:12:40.922333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.778 [2024-11-20 17:12:40.935211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.778 [2024-11-20 17:12:40.935701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.778 [2024-11-20 17:12:40.935726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:48.778 [2024-11-20 17:12:40.935735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:48.778 [2024-11-20 17:12:40.935960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:48.778 [2024-11-20 17:12:40.936192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.778 [2024-11-20 17:12:40.936203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.778 [2024-11-20 17:12:40.936211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.778 [2024-11-20 17:12:40.936219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:49.040 [2024-11-20 17:12:40.949074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.040 [2024-11-20 17:12:40.949657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-11-20 17:12:40.949682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.040 [2024-11-20 17:12:40.949691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.040 [2024-11-20 17:12:40.949915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.040 [2024-11-20 17:12:40.950137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.040 [2024-11-20 17:12:40.950148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.040 [2024-11-20 17:12:40.950156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.040 [2024-11-20 17:12:40.950181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.040 [2024-11-20 17:12:40.963048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.040 [2024-11-20 17:12:40.963602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-11-20 17:12:40.963627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.040 [2024-11-20 17:12:40.963635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.040 [2024-11-20 17:12:40.963858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.040 [2024-11-20 17:12:40.964081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.040 [2024-11-20 17:12:40.964091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.040 [2024-11-20 17:12:40.964099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.040 [2024-11-20 17:12:40.964107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:49.041 [2024-11-20 17:12:40.977001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.041 [2024-11-20 17:12:40.977690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-11-20 17:12:40.977753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.041 [2024-11-20 17:12:40.977767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.041 [2024-11-20 17:12:40.978024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.041 [2024-11-20 17:12:40.978268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.041 [2024-11-20 17:12:40.978279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.041 [2024-11-20 17:12:40.978287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.041 [2024-11-20 17:12:40.978296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.041 [2024-11-20 17:12:40.990893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.041 [2024-11-20 17:12:40.991537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-11-20 17:12:40.991567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.041 [2024-11-20 17:12:40.991576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.041 [2024-11-20 17:12:40.991801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.041 [2024-11-20 17:12:40.992025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.041 [2024-11-20 17:12:40.992037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.041 [2024-11-20 17:12:40.992045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.041 [2024-11-20 17:12:40.992053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:49.041 [2024-11-20 17:12:41.004922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.041 [2024-11-20 17:12:41.005608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-11-20 17:12:41.005671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.041 [2024-11-20 17:12:41.005684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.041 [2024-11-20 17:12:41.005942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.041 [2024-11-20 17:12:41.006185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.041 [2024-11-20 17:12:41.006198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.041 [2024-11-20 17:12:41.006210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.041 [2024-11-20 17:12:41.006221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.041 [2024-11-20 17:12:41.018915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.041 [2024-11-20 17:12:41.019646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-11-20 17:12:41.019709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.041 [2024-11-20 17:12:41.019722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.041 [2024-11-20 17:12:41.019979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.041 [2024-11-20 17:12:41.020219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.041 [2024-11-20 17:12:41.020230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.041 [2024-11-20 17:12:41.020238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.041 [2024-11-20 17:12:41.020247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:49.041 [2024-11-20 17:12:41.032962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.041 [2024-11-20 17:12:41.033549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-11-20 17:12:41.033611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.041 [2024-11-20 17:12:41.033624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.041 [2024-11-20 17:12:41.033881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.041 [2024-11-20 17:12:41.034111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.041 [2024-11-20 17:12:41.034120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.041 [2024-11-20 17:12:41.034129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.041 [2024-11-20 17:12:41.034137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.041 [2024-11-20 17:12:41.046837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.041 [2024-11-20 17:12:41.047513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-11-20 17:12:41.047575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.041 [2024-11-20 17:12:41.047588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.041 [2024-11-20 17:12:41.047853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.041 [2024-11-20 17:12:41.048082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.041 [2024-11-20 17:12:41.048092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.041 [2024-11-20 17:12:41.048101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.041 [2024-11-20 17:12:41.048109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:49.041 [2024-11-20 17:12:41.060781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.041 [2024-11-20 17:12:41.061379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-11-20 17:12:41.061440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.041 [2024-11-20 17:12:41.061453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.041 [2024-11-20 17:12:41.061710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.041 [2024-11-20 17:12:41.061938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.041 [2024-11-20 17:12:41.061949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.041 [2024-11-20 17:12:41.061958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.041 [2024-11-20 17:12:41.061967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.041 [2024-11-20 17:12:41.074679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.041 [2024-11-20 17:12:41.075281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-11-20 17:12:41.075340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.041 [2024-11-20 17:12:41.075353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.041 [2024-11-20 17:12:41.075609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.041 [2024-11-20 17:12:41.075839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.041 [2024-11-20 17:12:41.075850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.041 [2024-11-20 17:12:41.075859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.041 [2024-11-20 17:12:41.075868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:49.041 [2024-11-20 17:12:41.088565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.041 [2024-11-20 17:12:41.089237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-11-20 17:12:41.089300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.041 [2024-11-20 17:12:41.089313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.041 [2024-11-20 17:12:41.089571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.041 [2024-11-20 17:12:41.089800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.041 [2024-11-20 17:12:41.089817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.041 [2024-11-20 17:12:41.089826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.041 [2024-11-20 17:12:41.089835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.041 [2024-11-20 17:12:41.102548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.041 [2024-11-20 17:12:41.103236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-11-20 17:12:41.103300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.041 [2024-11-20 17:12:41.103314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.041 [2024-11-20 17:12:41.103572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.041 [2024-11-20 17:12:41.103801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.041 [2024-11-20 17:12:41.103812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.041 [2024-11-20 17:12:41.103821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.042 [2024-11-20 17:12:41.103831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:49.042 [2024-11-20 17:12:41.116521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.042 [2024-11-20 17:12:41.117118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.042 [2024-11-20 17:12:41.117147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.042 [2024-11-20 17:12:41.117156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.042 [2024-11-20 17:12:41.117390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.042 [2024-11-20 17:12:41.117614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.042 [2024-11-20 17:12:41.117624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.042 [2024-11-20 17:12:41.117632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.042 [2024-11-20 17:12:41.117640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.042 [2024-11-20 17:12:41.130515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.042 [2024-11-20 17:12:41.131054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.042 [2024-11-20 17:12:41.131079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.042 [2024-11-20 17:12:41.131087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.042 [2024-11-20 17:12:41.131318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.042 [2024-11-20 17:12:41.131541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.042 [2024-11-20 17:12:41.131551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.042 [2024-11-20 17:12:41.131559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.042 [2024-11-20 17:12:41.131581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:49.042 [2024-11-20 17:12:41.144436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.042 [2024-11-20 17:12:41.145013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.042 [2024-11-20 17:12:41.145036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.042 [2024-11-20 17:12:41.145044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.042 [2024-11-20 17:12:41.145273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.042 [2024-11-20 17:12:41.145496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.042 [2024-11-20 17:12:41.145506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.042 [2024-11-20 17:12:41.145514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.042 [2024-11-20 17:12:41.145522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.042 [2024-11-20 17:12:41.158358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.042 [2024-11-20 17:12:41.158924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.042 [2024-11-20 17:12:41.158947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.042 [2024-11-20 17:12:41.158955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.042 [2024-11-20 17:12:41.159184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.042 [2024-11-20 17:12:41.159408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.042 [2024-11-20 17:12:41.159424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.042 [2024-11-20 17:12:41.159431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.042 [2024-11-20 17:12:41.159439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:49.042 [2024-11-20 17:12:41.172290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.042 [2024-11-20 17:12:41.172859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.042 [2024-11-20 17:12:41.172882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.042 [2024-11-20 17:12:41.172890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.042 [2024-11-20 17:12:41.173112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.042 [2024-11-20 17:12:41.173343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.042 [2024-11-20 17:12:41.173355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.042 [2024-11-20 17:12:41.173362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.042 [2024-11-20 17:12:41.173369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.042 [2024-11-20 17:12:41.186217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.042 [2024-11-20 17:12:41.186722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.042 [2024-11-20 17:12:41.186745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.042 [2024-11-20 17:12:41.186753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.042 [2024-11-20 17:12:41.186976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.042 [2024-11-20 17:12:41.187208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.042 [2024-11-20 17:12:41.187219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.042 [2024-11-20 17:12:41.187227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.042 [2024-11-20 17:12:41.187234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:49.042 [2024-11-20 17:12:41.200088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.042 [2024-11-20 17:12:41.200633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.042 [2024-11-20 17:12:41.200657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.042 [2024-11-20 17:12:41.200665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.042 [2024-11-20 17:12:41.200888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.042 [2024-11-20 17:12:41.201110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.042 [2024-11-20 17:12:41.201120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.042 [2024-11-20 17:12:41.201128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.042 [2024-11-20 17:12:41.201135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.305 [2024-11-20 17:12:41.214001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.305 [2024-11-20 17:12:41.214557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.305 [2024-11-20 17:12:41.214580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.305 [2024-11-20 17:12:41.214589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.305 [2024-11-20 17:12:41.214810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.305 [2024-11-20 17:12:41.215033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.305 [2024-11-20 17:12:41.215043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.305 [2024-11-20 17:12:41.215051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.305 [2024-11-20 17:12:41.215058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:49.305 9017.67 IOPS, 35.23 MiB/s [2024-11-20T16:12:41.481Z] [2024-11-20 17:12:41.227925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.305 [2024-11-20 17:12:41.228524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.305 [2024-11-20 17:12:41.228550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.305 [2024-11-20 17:12:41.228565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.305 [2024-11-20 17:12:41.228789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.305 [2024-11-20 17:12:41.229012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.305 [2024-11-20 17:12:41.229021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.305 [2024-11-20 17:12:41.229028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.305 [2024-11-20 17:12:41.229035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.305 [2024-11-20 17:12:41.241905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.305 [2024-11-20 17:12:41.242444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.305 [2024-11-20 17:12:41.242469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.305 [2024-11-20 17:12:41.242477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.305 [2024-11-20 17:12:41.242700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.305 [2024-11-20 17:12:41.242924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.305 [2024-11-20 17:12:41.242933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.305 [2024-11-20 17:12:41.242940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.305 [2024-11-20 17:12:41.242948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:49.305 [2024-11-20 17:12:41.255794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.305 [2024-11-20 17:12:41.256418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.305 [2024-11-20 17:12:41.256481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.305 [2024-11-20 17:12:41.256494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.305 [2024-11-20 17:12:41.256752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.305 [2024-11-20 17:12:41.256982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.305 [2024-11-20 17:12:41.256992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.305 [2024-11-20 17:12:41.257000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.305 [2024-11-20 17:12:41.257009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.305 [2024-11-20 17:12:41.269708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.305 [2024-11-20 17:12:41.270133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.305 [2024-11-20 17:12:41.270184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.305 [2024-11-20 17:12:41.270195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.305 [2024-11-20 17:12:41.270440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.305 [2024-11-20 17:12:41.270674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.305 [2024-11-20 17:12:41.270684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.305 [2024-11-20 17:12:41.270692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.305 [2024-11-20 17:12:41.270701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:49.305 [2024-11-20 17:12:41.283600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.305 [2024-11-20 17:12:41.284259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.305 [2024-11-20 17:12:41.284322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.305 [2024-11-20 17:12:41.284336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.305 [2024-11-20 17:12:41.284594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.305 [2024-11-20 17:12:41.284823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.305 [2024-11-20 17:12:41.284834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.305 [2024-11-20 17:12:41.284843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.305 [2024-11-20 17:12:41.284852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.305 [2024-11-20 17:12:41.297542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.305 [2024-11-20 17:12:41.298128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.305 [2024-11-20 17:12:41.298205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.305 [2024-11-20 17:12:41.298219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.305 [2024-11-20 17:12:41.298476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.305 [2024-11-20 17:12:41.298705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.305 [2024-11-20 17:12:41.298714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.305 [2024-11-20 17:12:41.298723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.305 [2024-11-20 17:12:41.298731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:49.305 [2024-11-20 17:12:41.311470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.305 [2024-11-20 17:12:41.312083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.305 [2024-11-20 17:12:41.312146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.305 [2024-11-20 17:12:41.312171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.305 [2024-11-20 17:12:41.312429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.305 [2024-11-20 17:12:41.312659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.305 [2024-11-20 17:12:41.312668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.305 [2024-11-20 17:12:41.312677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.305 [2024-11-20 17:12:41.312693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.306 [2024-11-20 17:12:41.325412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.306 [2024-11-20 17:12:41.325967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.306 [2024-11-20 17:12:41.325995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.306 [2024-11-20 17:12:41.326005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.306 [2024-11-20 17:12:41.326237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.306 [2024-11-20 17:12:41.326463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.306 [2024-11-20 17:12:41.326472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.306 [2024-11-20 17:12:41.326481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.306 [2024-11-20 17:12:41.326489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:49.306 [2024-11-20 17:12:41.339362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.306 [2024-11-20 17:12:41.339894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.306 [2024-11-20 17:12:41.339918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:49.306 [2024-11-20 17:12:41.339927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:49.306 [2024-11-20 17:12:41.340150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:49.306 [2024-11-20 17:12:41.340383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.306 [2024-11-20 17:12:41.340393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.306 [2024-11-20 17:12:41.340401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.306 [2024-11-20 17:12:41.340410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.306 [2024-11-20 17:12:41.353292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.306 [2024-11-20 17:12:41.353834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.306 [2024-11-20 17:12:41.353857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.306 [2024-11-20 17:12:41.353865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.306 [2024-11-20 17:12:41.354086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.306 [2024-11-20 17:12:41.354319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.306 [2024-11-20 17:12:41.354334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.306 [2024-11-20 17:12:41.354344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.306 [2024-11-20 17:12:41.354352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.306 [2024-11-20 17:12:41.367233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.306 [2024-11-20 17:12:41.367887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.306 [2024-11-20 17:12:41.367950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.306 [2024-11-20 17:12:41.367963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.306 [2024-11-20 17:12:41.368235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.306 [2024-11-20 17:12:41.368465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.306 [2024-11-20 17:12:41.368477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.306 [2024-11-20 17:12:41.368486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.306 [2024-11-20 17:12:41.368495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.306 [2024-11-20 17:12:41.381223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.306 [2024-11-20 17:12:41.381781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.306 [2024-11-20 17:12:41.381810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.306 [2024-11-20 17:12:41.381820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.306 [2024-11-20 17:12:41.382047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.306 [2024-11-20 17:12:41.382281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.306 [2024-11-20 17:12:41.382291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.306 [2024-11-20 17:12:41.382300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.306 [2024-11-20 17:12:41.382308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.306 [2024-11-20 17:12:41.395192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.306 [2024-11-20 17:12:41.395776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.306 [2024-11-20 17:12:41.395801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.306 [2024-11-20 17:12:41.395809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.306 [2024-11-20 17:12:41.396032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.306 [2024-11-20 17:12:41.396263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.306 [2024-11-20 17:12:41.396275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.306 [2024-11-20 17:12:41.396283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.306 [2024-11-20 17:12:41.396290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.306 [2024-11-20 17:12:41.409188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.306 [2024-11-20 17:12:41.409717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.306 [2024-11-20 17:12:41.409740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.306 [2024-11-20 17:12:41.409757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.306 [2024-11-20 17:12:41.409980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.306 [2024-11-20 17:12:41.410210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.306 [2024-11-20 17:12:41.410218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.306 [2024-11-20 17:12:41.410226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.306 [2024-11-20 17:12:41.410233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.306 [2024-11-20 17:12:41.423149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.306 [2024-11-20 17:12:41.423648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.306 [2024-11-20 17:12:41.423670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.306 [2024-11-20 17:12:41.423679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.306 [2024-11-20 17:12:41.423903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.306 [2024-11-20 17:12:41.424126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.306 [2024-11-20 17:12:41.424135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.306 [2024-11-20 17:12:41.424144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.306 [2024-11-20 17:12:41.424153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.306 [2024-11-20 17:12:41.437075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.306 [2024-11-20 17:12:41.437670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.306 [2024-11-20 17:12:41.437693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.306 [2024-11-20 17:12:41.437702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.306 [2024-11-20 17:12:41.437926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.306 [2024-11-20 17:12:41.438148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.306 [2024-11-20 17:12:41.438157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.306 [2024-11-20 17:12:41.438172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.306 [2024-11-20 17:12:41.438180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.306 [2024-11-20 17:12:41.451084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.306 [2024-11-20 17:12:41.451577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.306 [2024-11-20 17:12:41.451601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.306 [2024-11-20 17:12:41.451610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.306 [2024-11-20 17:12:41.451832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.306 [2024-11-20 17:12:41.452063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.306 [2024-11-20 17:12:41.452072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.306 [2024-11-20 17:12:41.452081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.307 [2024-11-20 17:12:41.452089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.307 [2024-11-20 17:12:41.465004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.307 [2024-11-20 17:12:41.465624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.307 [2024-11-20 17:12:41.465647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.307 [2024-11-20 17:12:41.465656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.307 [2024-11-20 17:12:41.465879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.307 [2024-11-20 17:12:41.466101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.307 [2024-11-20 17:12:41.466109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.307 [2024-11-20 17:12:41.466118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.307 [2024-11-20 17:12:41.466126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.570 [2024-11-20 17:12:41.478846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.570 [2024-11-20 17:12:41.479423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.570 [2024-11-20 17:12:41.479446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.570 [2024-11-20 17:12:41.479455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.570 [2024-11-20 17:12:41.479680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.570 [2024-11-20 17:12:41.479903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.570 [2024-11-20 17:12:41.479913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.570 [2024-11-20 17:12:41.479921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.570 [2024-11-20 17:12:41.479929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.570 [2024-11-20 17:12:41.492817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.570 [2024-11-20 17:12:41.493474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.570 [2024-11-20 17:12:41.493535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.570 [2024-11-20 17:12:41.493548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.570 [2024-11-20 17:12:41.493806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.570 [2024-11-20 17:12:41.494035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.570 [2024-11-20 17:12:41.494045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.570 [2024-11-20 17:12:41.494055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.570 [2024-11-20 17:12:41.494071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.570 [2024-11-20 17:12:41.505637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.570 [2024-11-20 17:12:41.506202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.570 [2024-11-20 17:12:41.506229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.570 [2024-11-20 17:12:41.506239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.570 [2024-11-20 17:12:41.506399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.570 [2024-11-20 17:12:41.506553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.570 [2024-11-20 17:12:41.506561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.570 [2024-11-20 17:12:41.506568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.570 [2024-11-20 17:12:41.506575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.570 [2024-11-20 17:12:41.518435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.570 [2024-11-20 17:12:41.518985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.570 [2024-11-20 17:12:41.519006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.570 [2024-11-20 17:12:41.519014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.570 [2024-11-20 17:12:41.519178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.570 [2024-11-20 17:12:41.519334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.570 [2024-11-20 17:12:41.519340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.570 [2024-11-20 17:12:41.519346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.570 [2024-11-20 17:12:41.519353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.570 [2024-11-20 17:12:41.531119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.570 [2024-11-20 17:12:41.531776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.570 [2024-11-20 17:12:41.531821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.570 [2024-11-20 17:12:41.531831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.570 [2024-11-20 17:12:41.532011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.570 [2024-11-20 17:12:41.532180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.570 [2024-11-20 17:12:41.532188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.570 [2024-11-20 17:12:41.532194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.570 [2024-11-20 17:12:41.532203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.570 [2024-11-20 17:12:41.543820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.570 [2024-11-20 17:12:41.544470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.570 [2024-11-20 17:12:41.544513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.570 [2024-11-20 17:12:41.544523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.570 [2024-11-20 17:12:41.544702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.570 [2024-11-20 17:12:41.544859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.570 [2024-11-20 17:12:41.544866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.570 [2024-11-20 17:12:41.544873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.570 [2024-11-20 17:12:41.544880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.570 [2024-11-20 17:12:41.556496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.570 [2024-11-20 17:12:41.557082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.570 [2024-11-20 17:12:41.557120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.570 [2024-11-20 17:12:41.557130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.570 [2024-11-20 17:12:41.557317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.571 [2024-11-20 17:12:41.557475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.571 [2024-11-20 17:12:41.557482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.571 [2024-11-20 17:12:41.557488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.571 [2024-11-20 17:12:41.557495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.571 [2024-11-20 17:12:41.569250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.571 [2024-11-20 17:12:41.569769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.571 [2024-11-20 17:12:41.569787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.571 [2024-11-20 17:12:41.569794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.571 [2024-11-20 17:12:41.569948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.571 [2024-11-20 17:12:41.570100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.571 [2024-11-20 17:12:41.570106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.571 [2024-11-20 17:12:41.570112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.571 [2024-11-20 17:12:41.570118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.571 [2024-11-20 17:12:41.581998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.571 [2024-11-20 17:12:41.582580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.571 [2024-11-20 17:12:41.582616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.571 [2024-11-20 17:12:41.582630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.571 [2024-11-20 17:12:41.582802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.571 [2024-11-20 17:12:41.582960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.571 [2024-11-20 17:12:41.582967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.571 [2024-11-20 17:12:41.582973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.571 [2024-11-20 17:12:41.582979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.571 [2024-11-20 17:12:41.594717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.571 [2024-11-20 17:12:41.595397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.571 [2024-11-20 17:12:41.595430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.571 [2024-11-20 17:12:41.595440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.571 [2024-11-20 17:12:41.595612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.571 [2024-11-20 17:12:41.595768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.571 [2024-11-20 17:12:41.595774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.571 [2024-11-20 17:12:41.595780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.571 [2024-11-20 17:12:41.595787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.571 [2024-11-20 17:12:41.607377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.571 [2024-11-20 17:12:41.607932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.571 [2024-11-20 17:12:41.607964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.571 [2024-11-20 17:12:41.607973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.571 [2024-11-20 17:12:41.608144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.571 [2024-11-20 17:12:41.608306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.571 [2024-11-20 17:12:41.608314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.571 [2024-11-20 17:12:41.608320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.571 [2024-11-20 17:12:41.608326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.571 [2024-11-20 17:12:41.620058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.571 [2024-11-20 17:12:41.620647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.571 [2024-11-20 17:12:41.620679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.571 [2024-11-20 17:12:41.620688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.571 [2024-11-20 17:12:41.620858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.571 [2024-11-20 17:12:41.621017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.571 [2024-11-20 17:12:41.621024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.571 [2024-11-20 17:12:41.621030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.571 [2024-11-20 17:12:41.621036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.571 [2024-11-20 17:12:41.632771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.571 [2024-11-20 17:12:41.633376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.571 [2024-11-20 17:12:41.633406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.571 [2024-11-20 17:12:41.633415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.571 [2024-11-20 17:12:41.633585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.571 [2024-11-20 17:12:41.633739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.571 [2024-11-20 17:12:41.633745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.571 [2024-11-20 17:12:41.633752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.571 [2024-11-20 17:12:41.633758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.571 [2024-11-20 17:12:41.645516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.571 [2024-11-20 17:12:41.646054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.571 [2024-11-20 17:12:41.646083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.571 [2024-11-20 17:12:41.646093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.571 [2024-11-20 17:12:41.646268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.571 [2024-11-20 17:12:41.646423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.571 [2024-11-20 17:12:41.646430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.571 [2024-11-20 17:12:41.646436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.571 [2024-11-20 17:12:41.646442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.571 [2024-11-20 17:12:41.658286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.571 [2024-11-20 17:12:41.658849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.571 [2024-11-20 17:12:41.658878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.571 [2024-11-20 17:12:41.658886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.571 [2024-11-20 17:12:41.659055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.571 [2024-11-20 17:12:41.659216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.571 [2024-11-20 17:12:41.659223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.571 [2024-11-20 17:12:41.659229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.571 [2024-11-20 17:12:41.659239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.571 [2024-11-20 17:12:41.670955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.571 [2024-11-20 17:12:41.671426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.571 [2024-11-20 17:12:41.671441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.571 [2024-11-20 17:12:41.671447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.571 [2024-11-20 17:12:41.671600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.571 [2024-11-20 17:12:41.671752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.571 [2024-11-20 17:12:41.671757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.571 [2024-11-20 17:12:41.671763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.571 [2024-11-20 17:12:41.671768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.571 [2024-11-20 17:12:41.683615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.571 [2024-11-20 17:12:41.684105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.571 [2024-11-20 17:12:41.684118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.571 [2024-11-20 17:12:41.684124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.572 [2024-11-20 17:12:41.684280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.572 [2024-11-20 17:12:41.684432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.572 [2024-11-20 17:12:41.684438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.572 [2024-11-20 17:12:41.684443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.572 [2024-11-20 17:12:41.684448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.572 [2024-11-20 17:12:41.696373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.572 [2024-11-20 17:12:41.696969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.572 [2024-11-20 17:12:41.696999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.572 [2024-11-20 17:12:41.697008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.572 [2024-11-20 17:12:41.697182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.572 [2024-11-20 17:12:41.697337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.572 [2024-11-20 17:12:41.697344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.572 [2024-11-20 17:12:41.697350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.572 [2024-11-20 17:12:41.697356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.572 [2024-11-20 17:12:41.709069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.572 [2024-11-20 17:12:41.709557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.572 [2024-11-20 17:12:41.709572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.572 [2024-11-20 17:12:41.709578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.572 [2024-11-20 17:12:41.709731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.572 [2024-11-20 17:12:41.709882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.572 [2024-11-20 17:12:41.709888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.572 [2024-11-20 17:12:41.709893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.572 [2024-11-20 17:12:41.709898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.572 [2024-11-20 17:12:41.721743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.572 [2024-11-20 17:12:41.722398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.572 [2024-11-20 17:12:41.722427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.572 [2024-11-20 17:12:41.722436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.572 [2024-11-20 17:12:41.722604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.572 [2024-11-20 17:12:41.722758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.572 [2024-11-20 17:12:41.722764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.572 [2024-11-20 17:12:41.722770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.572 [2024-11-20 17:12:41.722776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.572 [2024-11-20 17:12:41.734508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.572 [2024-11-20 17:12:41.734969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.572 [2024-11-20 17:12:41.734983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.572 [2024-11-20 17:12:41.734989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.572 [2024-11-20 17:12:41.735141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.572 [2024-11-20 17:12:41.735299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.572 [2024-11-20 17:12:41.735305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.572 [2024-11-20 17:12:41.735315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.572 [2024-11-20 17:12:41.735320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.833 [2024-11-20 17:12:41.747179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.833 [2024-11-20 17:12:41.747645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.833 [2024-11-20 17:12:41.747658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.833 [2024-11-20 17:12:41.747668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.833 [2024-11-20 17:12:41.747821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.833 [2024-11-20 17:12:41.747972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.833 [2024-11-20 17:12:41.747978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.834 [2024-11-20 17:12:41.747983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.834 [2024-11-20 17:12:41.747988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.834 [2024-11-20 17:12:41.759813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.834 [2024-11-20 17:12:41.760458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.834 [2024-11-20 17:12:41.760487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.834 [2024-11-20 17:12:41.760496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.834 [2024-11-20 17:12:41.760664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.834 [2024-11-20 17:12:41.760818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.834 [2024-11-20 17:12:41.760825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.834 [2024-11-20 17:12:41.760831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.834 [2024-11-20 17:12:41.760837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.834 [2024-11-20 17:12:41.772554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.834 [2024-11-20 17:12:41.773136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.834 [2024-11-20 17:12:41.773181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.834 [2024-11-20 17:12:41.773189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.834 [2024-11-20 17:12:41.773357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.834 [2024-11-20 17:12:41.773513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.834 [2024-11-20 17:12:41.773519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.834 [2024-11-20 17:12:41.773525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.834 [2024-11-20 17:12:41.773531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.834 [2024-11-20 17:12:41.785234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.834 [2024-11-20 17:12:41.785761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.834 [2024-11-20 17:12:41.785790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.834 [2024-11-20 17:12:41.785799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.834 [2024-11-20 17:12:41.785967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.834 [2024-11-20 17:12:41.786128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.834 [2024-11-20 17:12:41.786134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.834 [2024-11-20 17:12:41.786140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.834 [2024-11-20 17:12:41.786146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.834 [2024-11-20 17:12:41.797990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.834 [2024-11-20 17:12:41.798336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.834 [2024-11-20 17:12:41.798352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.834 [2024-11-20 17:12:41.798357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.834 [2024-11-20 17:12:41.798511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.834 [2024-11-20 17:12:41.798662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.834 [2024-11-20 17:12:41.798668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.834 [2024-11-20 17:12:41.798673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.834 [2024-11-20 17:12:41.798678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.834 [2024-11-20 17:12:41.810653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.834 [2024-11-20 17:12:41.830988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.834 [2024-11-20 17:12:41.831023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.834 [2024-11-20 17:12:41.831034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.834 [2024-11-20 17:12:41.831278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.834 [2024-11-20 17:12:41.831488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.834 [2024-11-20 17:12:41.831497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.834 [2024-11-20 17:12:41.831505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.834 [2024-11-20 17:12:41.831512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.834 [2024-11-20 17:12:41.844935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.834 [2024-11-20 17:12:41.845560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.834 [2024-11-20 17:12:41.845598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.834 [2024-11-20 17:12:41.845609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.834 [2024-11-20 17:12:41.845831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.834 [2024-11-20 17:12:41.846041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.834 [2024-11-20 17:12:41.846050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.834 [2024-11-20 17:12:41.846057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.834 [2024-11-20 17:12:41.846070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.834 [2024-11-20 17:12:41.857583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.834 [2024-11-20 17:12:41.858176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.834 [2024-11-20 17:12:41.858206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.834 [2024-11-20 17:12:41.858215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.834 [2024-11-20 17:12:41.858385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.834 [2024-11-20 17:12:41.858540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.834 [2024-11-20 17:12:41.858547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.834 [2024-11-20 17:12:41.858553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.834 [2024-11-20 17:12:41.858559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.834 [2024-11-20 17:12:41.870279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.834 [2024-11-20 17:12:41.870722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.834 [2024-11-20 17:12:41.870752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.834 [2024-11-20 17:12:41.870760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.834 [2024-11-20 17:12:41.870928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.834 [2024-11-20 17:12:41.871083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.834 [2024-11-20 17:12:41.871090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.834 [2024-11-20 17:12:41.871096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.834 [2024-11-20 17:12:41.871102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.834 [2024-11-20 17:12:41.882971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.834 [2024-11-20 17:12:41.883518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.834 [2024-11-20 17:12:41.883548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.835 [2024-11-20 17:12:41.883557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.835 [2024-11-20 17:12:41.883725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.835 [2024-11-20 17:12:41.883880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.835 [2024-11-20 17:12:41.883886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.835 [2024-11-20 17:12:41.883892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.835 [2024-11-20 17:12:41.883898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.835 [2024-11-20 17:12:41.895734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.835 [2024-11-20 17:12:41.896301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.835 [2024-11-20 17:12:41.896332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.835 [2024-11-20 17:12:41.896340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.835 [2024-11-20 17:12:41.896508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.835 [2024-11-20 17:12:41.896663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.835 [2024-11-20 17:12:41.896670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.835 [2024-11-20 17:12:41.896675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.835 [2024-11-20 17:12:41.896681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.835 [2024-11-20 17:12:41.908376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.835 [2024-11-20 17:12:41.908825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.835 [2024-11-20 17:12:41.908854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.835 [2024-11-20 17:12:41.908863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.835 [2024-11-20 17:12:41.909033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.835 [2024-11-20 17:12:41.909194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.835 [2024-11-20 17:12:41.909201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.835 [2024-11-20 17:12:41.909207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.835 [2024-11-20 17:12:41.909213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.835 [2024-11-20 17:12:41.921051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.835 [2024-11-20 17:12:41.921659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.835 [2024-11-20 17:12:41.921689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.835 [2024-11-20 17:12:41.921698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.835 [2024-11-20 17:12:41.921866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.835 [2024-11-20 17:12:41.922020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.835 [2024-11-20 17:12:41.922027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.835 [2024-11-20 17:12:41.922033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.835 [2024-11-20 17:12:41.922039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.835 [2024-11-20 17:12:41.933768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.835 [2024-11-20 17:12:41.934217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.835 [2024-11-20 17:12:41.934233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.835 [2024-11-20 17:12:41.934242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.835 [2024-11-20 17:12:41.934395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.835 [2024-11-20 17:12:41.934548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.835 [2024-11-20 17:12:41.934553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.835 [2024-11-20 17:12:41.934559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.835 [2024-11-20 17:12:41.934563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.835 [2024-11-20 17:12:41.946473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.835 [2024-11-20 17:12:41.946960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.835 [2024-11-20 17:12:41.946974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.835 [2024-11-20 17:12:41.946980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.835 [2024-11-20 17:12:41.947132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.835 [2024-11-20 17:12:41.947289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.835 [2024-11-20 17:12:41.947296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.835 [2024-11-20 17:12:41.947301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.835 [2024-11-20 17:12:41.947305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.835 [2024-11-20 17:12:41.959138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.835 [2024-11-20 17:12:41.959582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.835 [2024-11-20 17:12:41.959595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.835 [2024-11-20 17:12:41.959600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.835 [2024-11-20 17:12:41.959752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.835 [2024-11-20 17:12:41.959904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.835 [2024-11-20 17:12:41.959910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.835 [2024-11-20 17:12:41.959915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.835 [2024-11-20 17:12:41.959920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.835 [2024-11-20 17:12:41.971767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.835 [2024-11-20 17:12:41.972267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.835 [2024-11-20 17:12:41.972298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.835 [2024-11-20 17:12:41.972307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.835 [2024-11-20 17:12:41.972477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.835 [2024-11-20 17:12:41.972633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.835 [2024-11-20 17:12:41.972643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.835 [2024-11-20 17:12:41.972650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.835 [2024-11-20 17:12:41.972655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.835 [2024-11-20 17:12:41.984504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.835 [2024-11-20 17:12:41.985061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.835 [2024-11-20 17:12:41.985091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.835 [2024-11-20 17:12:41.985099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.835 [2024-11-20 17:12:41.985274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.835 [2024-11-20 17:12:41.985429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.835 [2024-11-20 17:12:41.985436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.835 [2024-11-20 17:12:41.985441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.835 [2024-11-20 17:12:41.985447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.835 [2024-11-20 17:12:41.997155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.835 [2024-11-20 17:12:41.997555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.836 [2024-11-20 17:12:41.997571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:49.836 [2024-11-20 17:12:41.997576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:49.836 [2024-11-20 17:12:41.997728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:49.836 [2024-11-20 17:12:41.997880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.836 [2024-11-20 17:12:41.997886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.836 [2024-11-20 17:12:41.997891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.836 [2024-11-20 17:12:41.997895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.096 [2024-11-20 17:12:42.009871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.096 [2024-11-20 17:12:42.010437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.096 [2024-11-20 17:12:42.010468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.096 [2024-11-20 17:12:42.010476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.096 [2024-11-20 17:12:42.010644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.096 [2024-11-20 17:12:42.010800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.096 [2024-11-20 17:12:42.010806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.096 [2024-11-20 17:12:42.010812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.096 [2024-11-20 17:12:42.010822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.096 [2024-11-20 17:12:42.022540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.096 [2024-11-20 17:12:42.023016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.096 [2024-11-20 17:12:42.023031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.096 [2024-11-20 17:12:42.023036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.096 [2024-11-20 17:12:42.023193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.097 [2024-11-20 17:12:42.023346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.097 [2024-11-20 17:12:42.023352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.097 [2024-11-20 17:12:42.023357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.097 [2024-11-20 17:12:42.023361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.097 [2024-11-20 17:12:42.035222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.097 [2024-11-20 17:12:42.035763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.097 [2024-11-20 17:12:42.035792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.097 [2024-11-20 17:12:42.035801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.097 [2024-11-20 17:12:42.035969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.097 [2024-11-20 17:12:42.036123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.097 [2024-11-20 17:12:42.036130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.097 [2024-11-20 17:12:42.036137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.097 [2024-11-20 17:12:42.036142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.097 [2024-11-20 17:12:42.047978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.097 [2024-11-20 17:12:42.048545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.097 [2024-11-20 17:12:42.048575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.097 [2024-11-20 17:12:42.048584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.097 [2024-11-20 17:12:42.048752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.097 [2024-11-20 17:12:42.048907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.097 [2024-11-20 17:12:42.048914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.097 [2024-11-20 17:12:42.048920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.097 [2024-11-20 17:12:42.048925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.097 [2024-11-20 17:12:42.060640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.097 [2024-11-20 17:12:42.061090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.097 [2024-11-20 17:12:42.061104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.097 [2024-11-20 17:12:42.061110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.097 [2024-11-20 17:12:42.061266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.097 [2024-11-20 17:12:42.061418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.097 [2024-11-20 17:12:42.061424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.097 [2024-11-20 17:12:42.061429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.097 [2024-11-20 17:12:42.061434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.097 [2024-11-20 17:12:42.073400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.097 [2024-11-20 17:12:42.073848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.097 [2024-11-20 17:12:42.073861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.097 [2024-11-20 17:12:42.073866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.097 [2024-11-20 17:12:42.074017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.097 [2024-11-20 17:12:42.074194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.097 [2024-11-20 17:12:42.074201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.097 [2024-11-20 17:12:42.074206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.097 [2024-11-20 17:12:42.074210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.097 [2024-11-20 17:12:42.086038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.097 [2024-11-20 17:12:42.086574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.097 [2024-11-20 17:12:42.086604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.097 [2024-11-20 17:12:42.086613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.097 [2024-11-20 17:12:42.086784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.097 [2024-11-20 17:12:42.086939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.097 [2024-11-20 17:12:42.086947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.097 [2024-11-20 17:12:42.086954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.097 [2024-11-20 17:12:42.086960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.097 [2024-11-20 17:12:42.098671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.097 [2024-11-20 17:12:42.099119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.097 [2024-11-20 17:12:42.099134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.097 [2024-11-20 17:12:42.099143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.097 [2024-11-20 17:12:42.099300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.097 [2024-11-20 17:12:42.099452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.097 [2024-11-20 17:12:42.099458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.097 [2024-11-20 17:12:42.099463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.097 [2024-11-20 17:12:42.099468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.097 [2024-11-20 17:12:42.111434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.097 [2024-11-20 17:12:42.111878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.097 [2024-11-20 17:12:42.111890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.097 [2024-11-20 17:12:42.111896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.097 [2024-11-20 17:12:42.112047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.097 [2024-11-20 17:12:42.112203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.097 [2024-11-20 17:12:42.112209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.097 [2024-11-20 17:12:42.112214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.097 [2024-11-20 17:12:42.112219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.097 [2024-11-20 17:12:42.124195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.097 [2024-11-20 17:12:42.124714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.097 [2024-11-20 17:12:42.124744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.097 [2024-11-20 17:12:42.124753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.097 [2024-11-20 17:12:42.124920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.097 [2024-11-20 17:12:42.125075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.097 [2024-11-20 17:12:42.125082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.097 [2024-11-20 17:12:42.125087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.097 [2024-11-20 17:12:42.125093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.097 [2024-11-20 17:12:42.136933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.097 [2024-11-20 17:12:42.137504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.097 [2024-11-20 17:12:42.137534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.097 [2024-11-20 17:12:42.137543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.097 [2024-11-20 17:12:42.137711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.097 [2024-11-20 17:12:42.137867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.097 [2024-11-20 17:12:42.137877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.097 [2024-11-20 17:12:42.137883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.098 [2024-11-20 17:12:42.137889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.098 [2024-11-20 17:12:42.149596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.098 [2024-11-20 17:12:42.150131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.098 [2024-11-20 17:12:42.150166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.098 [2024-11-20 17:12:42.150174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.098 [2024-11-20 17:12:42.150342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.098 [2024-11-20 17:12:42.150497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.098 [2024-11-20 17:12:42.150504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.098 [2024-11-20 17:12:42.150510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.098 [2024-11-20 17:12:42.150516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.098 [2024-11-20 17:12:42.162336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.098 [2024-11-20 17:12:42.162868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.098 [2024-11-20 17:12:42.162897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.098 [2024-11-20 17:12:42.162906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.098 [2024-11-20 17:12:42.163074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.098 [2024-11-20 17:12:42.163237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.098 [2024-11-20 17:12:42.163244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.098 [2024-11-20 17:12:42.163250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.098 [2024-11-20 17:12:42.163256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.098 [2024-11-20 17:12:42.175105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.098 [2024-11-20 17:12:42.175647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.098 [2024-11-20 17:12:42.175677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.098 [2024-11-20 17:12:42.175686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.098 [2024-11-20 17:12:42.175853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.098 [2024-11-20 17:12:42.176009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.098 [2024-11-20 17:12:42.176015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.098 [2024-11-20 17:12:42.176021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.098 [2024-11-20 17:12:42.176030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.098 [2024-11-20 17:12:42.187737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.098 [2024-11-20 17:12:42.188272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.098 [2024-11-20 17:12:42.188302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.098 [2024-11-20 17:12:42.188310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.098 [2024-11-20 17:12:42.188478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.098 [2024-11-20 17:12:42.188633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.098 [2024-11-20 17:12:42.188640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.098 [2024-11-20 17:12:42.188645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.098 [2024-11-20 17:12:42.188651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.098 [2024-11-20 17:12:42.200499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.098 [2024-11-20 17:12:42.201030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.098 [2024-11-20 17:12:42.201060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.098 [2024-11-20 17:12:42.201068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.098 [2024-11-20 17:12:42.201244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.098 [2024-11-20 17:12:42.201400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.098 [2024-11-20 17:12:42.201407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.098 [2024-11-20 17:12:42.201412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.098 [2024-11-20 17:12:42.201418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.098 [2024-11-20 17:12:42.213270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.098 [2024-11-20 17:12:42.213805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.098 [2024-11-20 17:12:42.213835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.098 [2024-11-20 17:12:42.213844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.098 [2024-11-20 17:12:42.214011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.098 [2024-11-20 17:12:42.214174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.098 [2024-11-20 17:12:42.214181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.098 [2024-11-20 17:12:42.214188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.098 [2024-11-20 17:12:42.214194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.098 6763.25 IOPS, 26.42 MiB/s [2024-11-20T16:12:42.274Z] [2024-11-20 17:12:42.226035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.098 [2024-11-20 17:12:42.226631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.098 [2024-11-20 17:12:42.226661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.098 [2024-11-20 17:12:42.226670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.098 [2024-11-20 17:12:42.226837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.098 [2024-11-20 17:12:42.226993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.098 [2024-11-20 17:12:42.226999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.098 [2024-11-20 17:12:42.227004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.098 [2024-11-20 17:12:42.227010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.098 [2024-11-20 17:12:42.238722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.098 [2024-11-20 17:12:42.239220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.098 [2024-11-20 17:12:42.239235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.098 [2024-11-20 17:12:42.239241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.098 [2024-11-20 17:12:42.239393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.098 [2024-11-20 17:12:42.239545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.098 [2024-11-20 17:12:42.239552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.098 [2024-11-20 17:12:42.239557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.098 [2024-11-20 17:12:42.239561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.098 [2024-11-20 17:12:42.251387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.098 [2024-11-20 17:12:42.251924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.098 [2024-11-20 17:12:42.251954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.098 [2024-11-20 17:12:42.251963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.098 [2024-11-20 17:12:42.252130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.098 [2024-11-20 17:12:42.252293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.098 [2024-11-20 17:12:42.252300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.098 [2024-11-20 17:12:42.252306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.098 [2024-11-20 17:12:42.252311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.098 [2024-11-20 17:12:42.264154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.098 [2024-11-20 17:12:42.264694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.098 [2024-11-20 17:12:42.264724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.098 [2024-11-20 17:12:42.264735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.098 [2024-11-20 17:12:42.264903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.098 [2024-11-20 17:12:42.265059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.099 [2024-11-20 17:12:42.265065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.099 [2024-11-20 17:12:42.265071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.099 [2024-11-20 17:12:42.265076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.359 [2024-11-20 17:12:42.276814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.359 [2024-11-20 17:12:42.277350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.359 [2024-11-20 17:12:42.277379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.359 [2024-11-20 17:12:42.277388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.359 [2024-11-20 17:12:42.277556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.359 [2024-11-20 17:12:42.277711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.359 [2024-11-20 17:12:42.277718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.359 [2024-11-20 17:12:42.277724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.360 [2024-11-20 17:12:42.277730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.360 [2024-11-20 17:12:42.289569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.360 [2024-11-20 17:12:42.290011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.360 [2024-11-20 17:12:42.290025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.360 [2024-11-20 17:12:42.290031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.360 [2024-11-20 17:12:42.290188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.360 [2024-11-20 17:12:42.290341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.360 [2024-11-20 17:12:42.290347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.360 [2024-11-20 17:12:42.290352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.360 [2024-11-20 17:12:42.290356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.360 [2024-11-20 17:12:42.302199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.360 [2024-11-20 17:12:42.302740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.360 [2024-11-20 17:12:42.302770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.360 [2024-11-20 17:12:42.302779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.360 [2024-11-20 17:12:42.302947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.360 [2024-11-20 17:12:42.303106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.360 [2024-11-20 17:12:42.303113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.360 [2024-11-20 17:12:42.303118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.360 [2024-11-20 17:12:42.303124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.360 [2024-11-20 17:12:42.314822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.360 [2024-11-20 17:12:42.315365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.360 [2024-11-20 17:12:42.315395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.360 [2024-11-20 17:12:42.315404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.360 [2024-11-20 17:12:42.315571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.360 [2024-11-20 17:12:42.315726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.360 [2024-11-20 17:12:42.315733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.360 [2024-11-20 17:12:42.315738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.360 [2024-11-20 17:12:42.315744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.360 [2024-11-20 17:12:42.327595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.360 [2024-11-20 17:12:42.328127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.360 [2024-11-20 17:12:42.328164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.360 [2024-11-20 17:12:42.328173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.360 [2024-11-20 17:12:42.328340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.360 [2024-11-20 17:12:42.328495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.360 [2024-11-20 17:12:42.328502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.360 [2024-11-20 17:12:42.328508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.360 [2024-11-20 17:12:42.328514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.360 [2024-11-20 17:12:42.340353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.360 [2024-11-20 17:12:42.340941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.360 [2024-11-20 17:12:42.340971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.360 [2024-11-20 17:12:42.340979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.360 [2024-11-20 17:12:42.341147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.360 [2024-11-20 17:12:42.341311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.360 [2024-11-20 17:12:42.341318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.360 [2024-11-20 17:12:42.341327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.360 [2024-11-20 17:12:42.341333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.360 [2024-11-20 17:12:42.353051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.360 [2024-11-20 17:12:42.353540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.360 [2024-11-20 17:12:42.353556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.360 [2024-11-20 17:12:42.353561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.360 [2024-11-20 17:12:42.353713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.360 [2024-11-20 17:12:42.353866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.360 [2024-11-20 17:12:42.353871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.360 [2024-11-20 17:12:42.353877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.360 [2024-11-20 17:12:42.353882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.360 [2024-11-20 17:12:42.365717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.360 [2024-11-20 17:12:42.366393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.360 [2024-11-20 17:12:42.366423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.360 [2024-11-20 17:12:42.366432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.360 [2024-11-20 17:12:42.366600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.360 [2024-11-20 17:12:42.366755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.360 [2024-11-20 17:12:42.366762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.360 [2024-11-20 17:12:42.366767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.360 [2024-11-20 17:12:42.366773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.360 [2024-11-20 17:12:42.378485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.360 [2024-11-20 17:12:42.378945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.360 [2024-11-20 17:12:42.378960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.360 [2024-11-20 17:12:42.378966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.360 [2024-11-20 17:12:42.379118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.360 [2024-11-20 17:12:42.379277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.360 [2024-11-20 17:12:42.379283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.360 [2024-11-20 17:12:42.379288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.360 [2024-11-20 17:12:42.379293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.360 [2024-11-20 17:12:42.391116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.360 [2024-11-20 17:12:42.391636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.361 [2024-11-20 17:12:42.391666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.361 [2024-11-20 17:12:42.391675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.361 [2024-11-20 17:12:42.391842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.361 [2024-11-20 17:12:42.391997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.361 [2024-11-20 17:12:42.392004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.361 [2024-11-20 17:12:42.392009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.361 [2024-11-20 17:12:42.392015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.361 [2024-11-20 17:12:42.403872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.361 [2024-11-20 17:12:42.404415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.361 [2024-11-20 17:12:42.404445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.361 [2024-11-20 17:12:42.404453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.361 [2024-11-20 17:12:42.404621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.361 [2024-11-20 17:12:42.404776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.361 [2024-11-20 17:12:42.404782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.361 [2024-11-20 17:12:42.404788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.361 [2024-11-20 17:12:42.404794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.361 [2024-11-20 17:12:42.416503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.361 [2024-11-20 17:12:42.417033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.361 [2024-11-20 17:12:42.417063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.361 [2024-11-20 17:12:42.417072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.361 [2024-11-20 17:12:42.417247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.361 [2024-11-20 17:12:42.417403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.361 [2024-11-20 17:12:42.417409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.361 [2024-11-20 17:12:42.417415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.361 [2024-11-20 17:12:42.417420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.361 [2024-11-20 17:12:42.429261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.361 [2024-11-20 17:12:42.429788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.361 [2024-11-20 17:12:42.429818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.361 [2024-11-20 17:12:42.429833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.361 [2024-11-20 17:12:42.430001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.361 [2024-11-20 17:12:42.430156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.361 [2024-11-20 17:12:42.430170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.361 [2024-11-20 17:12:42.430176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.361 [2024-11-20 17:12:42.430182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.361 [2024-11-20 17:12:42.441892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:50.361 [2024-11-20 17:12:42.442458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.361 [2024-11-20 17:12:42.442489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:50.361 [2024-11-20 17:12:42.442498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:50.361 [2024-11-20 17:12:42.442665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:50.361 [2024-11-20 17:12:42.442821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:50.361 [2024-11-20 17:12:42.442827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:50.361 [2024-11-20 17:12:42.442833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:50.361 [2024-11-20 17:12:42.442839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.361 [2024-11-20 17:12:42.454550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.361 [2024-11-20 17:12:42.455059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.361 [2024-11-20 17:12:42.455089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.361 [2024-11-20 17:12:42.455098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.361 [2024-11-20 17:12:42.455272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.361 [2024-11-20 17:12:42.455428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.361 [2024-11-20 17:12:42.455434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.361 [2024-11-20 17:12:42.455440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.361 [2024-11-20 17:12:42.455447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.361 [2024-11-20 17:12:42.467276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.361 [2024-11-20 17:12:42.467726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.361 [2024-11-20 17:12:42.467741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.361 [2024-11-20 17:12:42.467746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.361 [2024-11-20 17:12:42.467898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.361 [2024-11-20 17:12:42.468054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.361 [2024-11-20 17:12:42.468061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.361 [2024-11-20 17:12:42.468066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.361 [2024-11-20 17:12:42.468071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.361 [2024-11-20 17:12:42.479903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.361 [2024-11-20 17:12:42.480440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.361 [2024-11-20 17:12:42.480470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.361 [2024-11-20 17:12:42.480478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.361 [2024-11-20 17:12:42.480646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.361 [2024-11-20 17:12:42.480801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.361 [2024-11-20 17:12:42.480807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.361 [2024-11-20 17:12:42.480813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.361 [2024-11-20 17:12:42.480819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.361 [2024-11-20 17:12:42.492672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.361 [2024-11-20 17:12:42.493121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.361 [2024-11-20 17:12:42.493135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.361 [2024-11-20 17:12:42.493141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.361 [2024-11-20 17:12:42.493297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.362 [2024-11-20 17:12:42.493450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.362 [2024-11-20 17:12:42.493456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.362 [2024-11-20 17:12:42.493461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.362 [2024-11-20 17:12:42.493466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.362 [2024-11-20 17:12:42.505298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.362 [2024-11-20 17:12:42.505845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.362 [2024-11-20 17:12:42.505875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.362 [2024-11-20 17:12:42.505884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.362 [2024-11-20 17:12:42.506052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.362 [2024-11-20 17:12:42.506214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.362 [2024-11-20 17:12:42.506222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.362 [2024-11-20 17:12:42.506231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.362 [2024-11-20 17:12:42.506237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.362 [2024-11-20 17:12:42.517959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.362 [2024-11-20 17:12:42.518509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.362 [2024-11-20 17:12:42.518539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.362 [2024-11-20 17:12:42.518547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.362 [2024-11-20 17:12:42.518716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.362 [2024-11-20 17:12:42.518871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.362 [2024-11-20 17:12:42.518877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.362 [2024-11-20 17:12:42.518884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.362 [2024-11-20 17:12:42.518890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.362 [2024-11-20 17:12:42.530649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.362 [2024-11-20 17:12:42.531194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.362 [2024-11-20 17:12:42.531225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.362 [2024-11-20 17:12:42.531234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.362 [2024-11-20 17:12:42.531404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.362 [2024-11-20 17:12:42.531559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.362 [2024-11-20 17:12:42.531566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.362 [2024-11-20 17:12:42.531571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.362 [2024-11-20 17:12:42.531577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.623 [2024-11-20 17:12:42.543286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.623 [2024-11-20 17:12:42.543818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.623 [2024-11-20 17:12:42.543847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.623 [2024-11-20 17:12:42.543856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.623 [2024-11-20 17:12:42.544024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.623 [2024-11-20 17:12:42.544187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.623 [2024-11-20 17:12:42.544194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.623 [2024-11-20 17:12:42.544199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.623 [2024-11-20 17:12:42.544205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.623 [2024-11-20 17:12:42.555917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.623 [2024-11-20 17:12:42.556519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.623 [2024-11-20 17:12:42.556549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.623 [2024-11-20 17:12:42.556558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.623 [2024-11-20 17:12:42.556725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.623 [2024-11-20 17:12:42.556881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.623 [2024-11-20 17:12:42.556887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.623 [2024-11-20 17:12:42.556893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.623 [2024-11-20 17:12:42.556898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.623 [2024-11-20 17:12:42.568609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.623 [2024-11-20 17:12:42.569054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.623 [2024-11-20 17:12:42.569082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.623 [2024-11-20 17:12:42.569091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.623 [2024-11-20 17:12:42.569267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.623 [2024-11-20 17:12:42.569422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.623 [2024-11-20 17:12:42.569429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.623 [2024-11-20 17:12:42.569435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.623 [2024-11-20 17:12:42.569440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.623 [2024-11-20 17:12:42.581298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.623 [2024-11-20 17:12:42.581811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.623 [2024-11-20 17:12:42.581841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.623 [2024-11-20 17:12:42.581849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.623 [2024-11-20 17:12:42.582017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.623 [2024-11-20 17:12:42.582180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.623 [2024-11-20 17:12:42.582187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.623 [2024-11-20 17:12:42.582192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.623 [2024-11-20 17:12:42.582198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.623 [2024-11-20 17:12:42.594056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.623 [2024-11-20 17:12:42.594615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.623 [2024-11-20 17:12:42.594645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.623 [2024-11-20 17:12:42.594657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.623 [2024-11-20 17:12:42.594826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.623 [2024-11-20 17:12:42.594981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.623 [2024-11-20 17:12:42.594988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.623 [2024-11-20 17:12:42.594994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.623 [2024-11-20 17:12:42.595000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.623 [2024-11-20 17:12:42.606706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.623 [2024-11-20 17:12:42.607165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.623 [2024-11-20 17:12:42.607181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.623 [2024-11-20 17:12:42.607187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.623 [2024-11-20 17:12:42.607339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.624 [2024-11-20 17:12:42.607492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.624 [2024-11-20 17:12:42.607497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.624 [2024-11-20 17:12:42.607502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.624 [2024-11-20 17:12:42.607507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.624 [2024-11-20 17:12:42.619344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.624 [2024-11-20 17:12:42.619875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.624 [2024-11-20 17:12:42.619905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.624 [2024-11-20 17:12:42.619914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.624 [2024-11-20 17:12:42.620081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.624 [2024-11-20 17:12:42.620244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.624 [2024-11-20 17:12:42.620251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.624 [2024-11-20 17:12:42.620257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.624 [2024-11-20 17:12:42.620262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.624 [2024-11-20 17:12:42.632112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.624 [2024-11-20 17:12:42.632645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.624 [2024-11-20 17:12:42.632675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.624 [2024-11-20 17:12:42.632684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.624 [2024-11-20 17:12:42.632852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.624 [2024-11-20 17:12:42.633010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.624 [2024-11-20 17:12:42.633017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.624 [2024-11-20 17:12:42.633023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.624 [2024-11-20 17:12:42.633028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.624 [2024-11-20 17:12:42.644871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.624 [2024-11-20 17:12:42.645459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.624 [2024-11-20 17:12:42.645489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.624 [2024-11-20 17:12:42.645498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.624 [2024-11-20 17:12:42.645665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.624 [2024-11-20 17:12:42.645820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.624 [2024-11-20 17:12:42.645826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.624 [2024-11-20 17:12:42.645832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.624 [2024-11-20 17:12:42.645839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.624 [2024-11-20 17:12:42.657559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.624 [2024-11-20 17:12:42.658004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.624 [2024-11-20 17:12:42.658019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.624 [2024-11-20 17:12:42.658025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.624 [2024-11-20 17:12:42.658182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.624 [2024-11-20 17:12:42.658335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.624 [2024-11-20 17:12:42.658341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.624 [2024-11-20 17:12:42.658346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.624 [2024-11-20 17:12:42.658350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.624 [2024-11-20 17:12:42.670190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.624 [2024-11-20 17:12:42.670753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.624 [2024-11-20 17:12:42.670782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.624 [2024-11-20 17:12:42.670791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.624 [2024-11-20 17:12:42.670959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.624 [2024-11-20 17:12:42.671114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.624 [2024-11-20 17:12:42.671121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.624 [2024-11-20 17:12:42.671126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.624 [2024-11-20 17:12:42.671136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.624 [2024-11-20 17:12:42.682851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.624 [2024-11-20 17:12:42.683390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.624 [2024-11-20 17:12:42.683421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.624 [2024-11-20 17:12:42.683429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.624 [2024-11-20 17:12:42.683597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.624 [2024-11-20 17:12:42.683752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.624 [2024-11-20 17:12:42.683758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.624 [2024-11-20 17:12:42.683764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.624 [2024-11-20 17:12:42.683770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.624 [2024-11-20 17:12:42.695600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.624 [2024-11-20 17:12:42.696150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.624 [2024-11-20 17:12:42.696185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.624 [2024-11-20 17:12:42.696193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.624 [2024-11-20 17:12:42.696361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.624 [2024-11-20 17:12:42.696515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.624 [2024-11-20 17:12:42.696522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.624 [2024-11-20 17:12:42.696528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.624 [2024-11-20 17:12:42.696534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.624 [2024-11-20 17:12:42.708228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.624 [2024-11-20 17:12:42.708670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.624 [2024-11-20 17:12:42.708685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.624 [2024-11-20 17:12:42.708690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.624 [2024-11-20 17:12:42.708843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.624 [2024-11-20 17:12:42.708995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.624 [2024-11-20 17:12:42.709001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.624 [2024-11-20 17:12:42.709006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.624 [2024-11-20 17:12:42.709011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.624 [2024-11-20 17:12:42.720986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.624 [2024-11-20 17:12:42.721553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.624 [2024-11-20 17:12:42.721583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.624 [2024-11-20 17:12:42.721592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.624 [2024-11-20 17:12:42.721759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.624 [2024-11-20 17:12:42.721914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.625 [2024-11-20 17:12:42.721921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.625 [2024-11-20 17:12:42.721926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.625 [2024-11-20 17:12:42.721932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.625 [2024-11-20 17:12:42.733652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.625 [2024-11-20 17:12:42.734210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.625 [2024-11-20 17:12:42.734240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.625 [2024-11-20 17:12:42.734249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.625 [2024-11-20 17:12:42.734419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.625 [2024-11-20 17:12:42.734574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.625 [2024-11-20 17:12:42.734581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.625 [2024-11-20 17:12:42.734587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.625 [2024-11-20 17:12:42.734592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.625 [2024-11-20 17:12:42.746292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.625 [2024-11-20 17:12:42.746755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.625 [2024-11-20 17:12:42.746770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.625 [2024-11-20 17:12:42.746776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.625 [2024-11-20 17:12:42.746928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.625 [2024-11-20 17:12:42.747080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.625 [2024-11-20 17:12:42.747086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.625 [2024-11-20 17:12:42.747091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.625 [2024-11-20 17:12:42.747096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.625 [2024-11-20 17:12:42.758925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.625 [2024-11-20 17:12:42.759383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.625 [2024-11-20 17:12:42.759397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.625 [2024-11-20 17:12:42.759406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.625 [2024-11-20 17:12:42.759557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.625 [2024-11-20 17:12:42.759710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.625 [2024-11-20 17:12:42.759715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.625 [2024-11-20 17:12:42.759720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.625 [2024-11-20 17:12:42.759725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.625 [2024-11-20 17:12:42.771560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.625 [2024-11-20 17:12:42.772048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.625 [2024-11-20 17:12:42.772061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.625 [2024-11-20 17:12:42.772066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.625 [2024-11-20 17:12:42.772222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.625 [2024-11-20 17:12:42.772374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.625 [2024-11-20 17:12:42.772380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.625 [2024-11-20 17:12:42.772385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.625 [2024-11-20 17:12:42.772390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.625 [2024-11-20 17:12:42.784226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.625 [2024-11-20 17:12:42.784761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.625 [2024-11-20 17:12:42.784791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.625 [2024-11-20 17:12:42.784799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.625 [2024-11-20 17:12:42.784968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.625 [2024-11-20 17:12:42.785123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.625 [2024-11-20 17:12:42.785129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.625 [2024-11-20 17:12:42.785135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.625 [2024-11-20 17:12:42.785141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.887 [2024-11-20 17:12:42.796855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.887 [2024-11-20 17:12:42.797386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.887 [2024-11-20 17:12:42.797416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.887 [2024-11-20 17:12:42.797425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.887 [2024-11-20 17:12:42.797592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.887 [2024-11-20 17:12:42.797751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.887 [2024-11-20 17:12:42.797758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.887 [2024-11-20 17:12:42.797763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.887 [2024-11-20 17:12:42.797769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.887 [2024-11-20 17:12:42.809554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.887 [2024-11-20 17:12:42.810101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.887 [2024-11-20 17:12:42.810131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.887 [2024-11-20 17:12:42.810140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.888 [2024-11-20 17:12:42.810318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.888 [2024-11-20 17:12:42.810474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.888 [2024-11-20 17:12:42.810480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.888 [2024-11-20 17:12:42.810486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.888 [2024-11-20 17:12:42.810492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.888 [2024-11-20 17:12:42.822192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.888 [2024-11-20 17:12:42.822758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.888 [2024-11-20 17:12:42.822787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.888 [2024-11-20 17:12:42.822796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.888 [2024-11-20 17:12:42.822963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.888 [2024-11-20 17:12:42.823119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.888 [2024-11-20 17:12:42.823125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.888 [2024-11-20 17:12:42.823131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.888 [2024-11-20 17:12:42.823137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.888 [2024-11-20 17:12:42.834848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.888 [2024-11-20 17:12:42.835330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.888 [2024-11-20 17:12:42.835359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.888 [2024-11-20 17:12:42.835368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.888 [2024-11-20 17:12:42.835536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.888 [2024-11-20 17:12:42.835691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.888 [2024-11-20 17:12:42.835697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.888 [2024-11-20 17:12:42.835703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.888 [2024-11-20 17:12:42.835712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.888 [2024-11-20 17:12:42.847566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.888 [2024-11-20 17:12:42.848151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.888 [2024-11-20 17:12:42.848187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.888 [2024-11-20 17:12:42.848196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.888 [2024-11-20 17:12:42.848366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.888 [2024-11-20 17:12:42.848522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.888 [2024-11-20 17:12:42.848529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.888 [2024-11-20 17:12:42.848535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.888 [2024-11-20 17:12:42.848541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.888 [2024-11-20 17:12:42.860268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.888 [2024-11-20 17:12:42.860728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.888 [2024-11-20 17:12:42.860743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.888 [2024-11-20 17:12:42.860750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.888 [2024-11-20 17:12:42.860903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.888 [2024-11-20 17:12:42.861056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.888 [2024-11-20 17:12:42.861061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.888 [2024-11-20 17:12:42.861067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.888 [2024-11-20 17:12:42.861072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.888 [2024-11-20 17:12:42.872909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.888 [2024-11-20 17:12:42.873438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.888 [2024-11-20 17:12:42.873468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.888 [2024-11-20 17:12:42.873476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.888 [2024-11-20 17:12:42.873644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.888 [2024-11-20 17:12:42.873799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.888 [2024-11-20 17:12:42.873806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.888 [2024-11-20 17:12:42.873812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.888 [2024-11-20 17:12:42.873818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.888 [2024-11-20 17:12:42.885662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.888 [2024-11-20 17:12:42.886225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.888 [2024-11-20 17:12:42.886255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.888 [2024-11-20 17:12:42.886264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.888 [2024-11-20 17:12:42.886434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.888 [2024-11-20 17:12:42.886589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.888 [2024-11-20 17:12:42.886595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.888 [2024-11-20 17:12:42.886601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.888 [2024-11-20 17:12:42.886607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.888 [2024-11-20 17:12:42.898305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.888 [2024-11-20 17:12:42.898859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.888 [2024-11-20 17:12:42.898888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.888 [2024-11-20 17:12:42.898897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.888 [2024-11-20 17:12:42.899064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.888 [2024-11-20 17:12:42.899227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.888 [2024-11-20 17:12:42.899234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.888 [2024-11-20 17:12:42.899239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.888 [2024-11-20 17:12:42.899245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.888 [2024-11-20 17:12:42.910943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.888 [2024-11-20 17:12:42.911525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.888 [2024-11-20 17:12:42.911555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.888 [2024-11-20 17:12:42.911564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.888 [2024-11-20 17:12:42.911732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.888 [2024-11-20 17:12:42.911888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.888 [2024-11-20 17:12:42.911894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.888 [2024-11-20 17:12:42.911900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.888 [2024-11-20 17:12:42.911906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.888 [2024-11-20 17:12:42.923621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.889 [2024-11-20 17:12:42.924131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.889 [2024-11-20 17:12:42.924167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.889 [2024-11-20 17:12:42.924180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.889 [2024-11-20 17:12:42.924351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.889 [2024-11-20 17:12:42.924506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.889 [2024-11-20 17:12:42.924513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.889 [2024-11-20 17:12:42.924518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.889 [2024-11-20 17:12:42.924524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.889 [2024-11-20 17:12:42.936386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.889 [2024-11-20 17:12:42.936919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.889 [2024-11-20 17:12:42.936949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.889 [2024-11-20 17:12:42.936958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.889 [2024-11-20 17:12:42.937126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.889 [2024-11-20 17:12:42.937287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.889 [2024-11-20 17:12:42.937294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.889 [2024-11-20 17:12:42.937300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.889 [2024-11-20 17:12:42.937306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.889 [2024-11-20 17:12:42.949154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.889 [2024-11-20 17:12:42.949597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.889 [2024-11-20 17:12:42.949627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.889 [2024-11-20 17:12:42.949637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.889 [2024-11-20 17:12:42.949807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.889 [2024-11-20 17:12:42.949963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.889 [2024-11-20 17:12:42.949970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.889 [2024-11-20 17:12:42.949975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.889 [2024-11-20 17:12:42.949981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.889 [2024-11-20 17:12:42.961819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.889 [2024-11-20 17:12:42.962387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.889 [2024-11-20 17:12:42.962418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.889 [2024-11-20 17:12:42.962426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.889 [2024-11-20 17:12:42.962594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.889 [2024-11-20 17:12:42.962752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.889 [2024-11-20 17:12:42.962759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.889 [2024-11-20 17:12:42.962764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.889 [2024-11-20 17:12:42.962770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.889 [2024-11-20 17:12:42.974492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.889 [2024-11-20 17:12:42.974855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.889 [2024-11-20 17:12:42.974871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.889 [2024-11-20 17:12:42.974877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.889 [2024-11-20 17:12:42.975030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.889 [2024-11-20 17:12:42.975197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.889 [2024-11-20 17:12:42.975203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.889 [2024-11-20 17:12:42.975208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.889 [2024-11-20 17:12:42.975213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.889 [2024-11-20 17:12:42.987205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.889 [2024-11-20 17:12:42.987659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.889 [2024-11-20 17:12:42.987673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.889 [2024-11-20 17:12:42.987678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.889 [2024-11-20 17:12:42.987830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.889 [2024-11-20 17:12:42.987982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.889 [2024-11-20 17:12:42.987988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.889 [2024-11-20 17:12:42.987993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.889 [2024-11-20 17:12:42.987998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.889 [2024-11-20 17:12:42.999838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.889 [2024-11-20 17:12:43.000358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.889 [2024-11-20 17:12:43.000388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.889 [2024-11-20 17:12:43.000397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.889 [2024-11-20 17:12:43.000567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.889 [2024-11-20 17:12:43.000723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.889 [2024-11-20 17:12:43.000729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.889 [2024-11-20 17:12:43.000735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.889 [2024-11-20 17:12:43.000745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.889 [2024-11-20 17:12:43.012582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.889 [2024-11-20 17:12:43.013144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.889 [2024-11-20 17:12:43.013182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.889 [2024-11-20 17:12:43.013191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.889 [2024-11-20 17:12:43.013361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.889 [2024-11-20 17:12:43.013516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.889 [2024-11-20 17:12:43.013523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.889 [2024-11-20 17:12:43.013528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.889 [2024-11-20 17:12:43.013534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.889 [2024-11-20 17:12:43.025256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.889 [2024-11-20 17:12:43.025787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.889 [2024-11-20 17:12:43.025817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.889 [2024-11-20 17:12:43.025825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.889 [2024-11-20 17:12:43.025994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.889 [2024-11-20 17:12:43.026149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.889 [2024-11-20 17:12:43.026155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.889 [2024-11-20 17:12:43.026170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.889 [2024-11-20 17:12:43.026175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.889 [2024-11-20 17:12:43.037900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.889 [2024-11-20 17:12:43.038466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.889 [2024-11-20 17:12:43.038496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.889 [2024-11-20 17:12:43.038505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.889 [2024-11-20 17:12:43.038672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.889 [2024-11-20 17:12:43.038828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.889 [2024-11-20 17:12:43.038834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.889 [2024-11-20 17:12:43.038841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.890 [2024-11-20 17:12:43.038846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:50.890 [2024-11-20 17:12:43.050553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.890 [2024-11-20 17:12:43.051098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.890 [2024-11-20 17:12:43.051127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:50.890 [2024-11-20 17:12:43.051136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:50.890 [2024-11-20 17:12:43.051311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:50.890 [2024-11-20 17:12:43.051467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.890 [2024-11-20 17:12:43.051474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.890 [2024-11-20 17:12:43.051479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.890 [2024-11-20 17:12:43.051485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.151 [2024-11-20 17:12:43.063192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.151 [2024-11-20 17:12:43.063728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.151 [2024-11-20 17:12:43.063758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.151 [2024-11-20 17:12:43.063767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.151 [2024-11-20 17:12:43.063935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.151 [2024-11-20 17:12:43.064090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.151 [2024-11-20 17:12:43.064096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.151 [2024-11-20 17:12:43.064103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.151 [2024-11-20 17:12:43.064108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.151 [2024-11-20 17:12:43.075959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.151 [2024-11-20 17:12:43.076509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.151 [2024-11-20 17:12:43.076539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.151 [2024-11-20 17:12:43.076548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.151 [2024-11-20 17:12:43.076716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.151 [2024-11-20 17:12:43.076870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.151 [2024-11-20 17:12:43.076877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.151 [2024-11-20 17:12:43.076882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.151 [2024-11-20 17:12:43.076888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.151 [2024-11-20 17:12:43.088725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.151 [2024-11-20 17:12:43.089281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.151 [2024-11-20 17:12:43.089311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.151 [2024-11-20 17:12:43.089324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.151 [2024-11-20 17:12:43.089494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.151 [2024-11-20 17:12:43.089650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.151 [2024-11-20 17:12:43.089656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.151 [2024-11-20 17:12:43.089661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.151 [2024-11-20 17:12:43.089667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.151 [2024-11-20 17:12:43.101393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.151 [2024-11-20 17:12:43.101980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.151 [2024-11-20 17:12:43.102010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.151 [2024-11-20 17:12:43.102019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.151 [2024-11-20 17:12:43.102194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.151 [2024-11-20 17:12:43.102350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.151 [2024-11-20 17:12:43.102356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.151 [2024-11-20 17:12:43.102362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.151 [2024-11-20 17:12:43.102367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.151 [2024-11-20 17:12:43.114089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.151 [2024-11-20 17:12:43.114674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.151 [2024-11-20 17:12:43.114704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.151 [2024-11-20 17:12:43.114713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.151 [2024-11-20 17:12:43.114882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.151 [2024-11-20 17:12:43.115038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.151 [2024-11-20 17:12:43.115044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.151 [2024-11-20 17:12:43.115050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.151 [2024-11-20 17:12:43.115056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.151 [2024-11-20 17:12:43.126752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.151 [2024-11-20 17:12:43.127390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.151 [2024-11-20 17:12:43.127419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.151 [2024-11-20 17:12:43.127428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.151 [2024-11-20 17:12:43.127595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.152 [2024-11-20 17:12:43.127755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.152 [2024-11-20 17:12:43.127761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.152 [2024-11-20 17:12:43.127767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.152 [2024-11-20 17:12:43.127773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.152 [2024-11-20 17:12:43.139485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.152 [2024-11-20 17:12:43.140027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.152 [2024-11-20 17:12:43.140057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.152 [2024-11-20 17:12:43.140066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.152 [2024-11-20 17:12:43.140240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.152 [2024-11-20 17:12:43.140396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.152 [2024-11-20 17:12:43.140402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.152 [2024-11-20 17:12:43.140408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.152 [2024-11-20 17:12:43.140413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.152 [2024-11-20 17:12:43.152250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.152 [2024-11-20 17:12:43.152783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.152 [2024-11-20 17:12:43.152813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.152 [2024-11-20 17:12:43.152821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.152 [2024-11-20 17:12:43.152989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.152 [2024-11-20 17:12:43.153144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.152 [2024-11-20 17:12:43.153150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.152 [2024-11-20 17:12:43.153156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.152 [2024-11-20 17:12:43.153168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.152 [2024-11-20 17:12:43.165021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.152 [2024-11-20 17:12:43.165497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.152 [2024-11-20 17:12:43.165527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.152 [2024-11-20 17:12:43.165536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.152 [2024-11-20 17:12:43.165704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.152 [2024-11-20 17:12:43.165860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.152 [2024-11-20 17:12:43.165867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.152 [2024-11-20 17:12:43.165873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.152 [2024-11-20 17:12:43.165882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.152 [2024-11-20 17:12:43.177732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.152 [2024-11-20 17:12:43.178260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.152 [2024-11-20 17:12:43.178290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.152 [2024-11-20 17:12:43.178299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.152 [2024-11-20 17:12:43.178469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.152 [2024-11-20 17:12:43.178624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.152 [2024-11-20 17:12:43.178630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.152 [2024-11-20 17:12:43.178637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.152 [2024-11-20 17:12:43.178642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.152 [2024-11-20 17:12:43.190482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.152 [2024-11-20 17:12:43.191014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.152 [2024-11-20 17:12:43.191044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.152 [2024-11-20 17:12:43.191053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.152 [2024-11-20 17:12:43.191227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.152 [2024-11-20 17:12:43.191383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.152 [2024-11-20 17:12:43.191389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.152 [2024-11-20 17:12:43.191394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.152 [2024-11-20 17:12:43.191400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.152 [2024-11-20 17:12:43.203247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.152 [2024-11-20 17:12:43.203713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.152 [2024-11-20 17:12:43.203727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.152 [2024-11-20 17:12:43.203733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.152 [2024-11-20 17:12:43.203885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.152 [2024-11-20 17:12:43.204037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.152 [2024-11-20 17:12:43.204043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.152 [2024-11-20 17:12:43.204048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.152 [2024-11-20 17:12:43.204053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.152 [2024-11-20 17:12:43.215879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.152 [2024-11-20 17:12:43.216476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.152 [2024-11-20 17:12:43.216506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.152 [2024-11-20 17:12:43.216515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.152 [2024-11-20 17:12:43.216685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.152 [2024-11-20 17:12:43.216840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.152 [2024-11-20 17:12:43.216847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.152 [2024-11-20 17:12:43.216852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.152 [2024-11-20 17:12:43.216858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.152 5410.60 IOPS, 21.14 MiB/s [2024-11-20T16:12:43.328Z] [2024-11-20 17:12:43.228594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.152 [2024-11-20 17:12:43.229170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.152 [2024-11-20 17:12:43.229201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.152 [2024-11-20 17:12:43.229209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.152 [2024-11-20 17:12:43.229377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.152 [2024-11-20 17:12:43.229532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.152 [2024-11-20 17:12:43.229538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.152 [2024-11-20 17:12:43.229544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.152 [2024-11-20 17:12:43.229550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
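(Aside: the "5410.60 IOPS, 21.14 MiB/s" sample interleaved above is bdevperf's periodic throughput report. A minimal sketch of the arithmetic, assuming a 4 KiB I/O size — the block size is not stated in this excerpt, but it is the value that makes the two numbers agree:

    /*
     * Sanity check for the bdevperf sample "5410.60 IOPS, 21.14 MiB/s".
     * Assumption: 4096-byte I/Os (not confirmed by this log excerpt).
     * 5410.60 * 4096 / (1024 * 1024) = 21.135..., which rounds to 21.14.
     */
    #include <stdio.h>

    int main(void)
    {
        double iops    = 5410.60;
        double io_size = 4096.0;  /* assumed I/O size in bytes */
        double mibps   = iops * io_size / (1024.0 * 1024.0);
        printf("%.2f IOPS at %.0f B -> %.2f MiB/s\n", iops, io_size, mibps);
        return 0;
    }

This prints 21.14 MiB/s, matching the report.)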
00:29:51.152 [2024-11-20 17:12:43.241265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.152 [2024-11-20 17:12:43.241813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.152 [2024-11-20 17:12:43.241843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.152 [2024-11-20 17:12:43.241852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.152 [2024-11-20 17:12:43.242019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.152 [2024-11-20 17:12:43.242181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.152 [2024-11-20 17:12:43.242189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.152 [2024-11-20 17:12:43.242194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.152 [2024-11-20 17:12:43.242200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.152 [2024-11-20 17:12:43.254030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.152 [2024-11-20 17:12:43.254582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.152 [2024-11-20 17:12:43.254612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.152 [2024-11-20 17:12:43.254624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.153 [2024-11-20 17:12:43.254792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.153 [2024-11-20 17:12:43.254947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.153 [2024-11-20 17:12:43.254953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.153 [2024-11-20 17:12:43.254959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.153 [2024-11-20 17:12:43.254965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.153 [2024-11-20 17:12:43.266684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.153 [2024-11-20 17:12:43.267127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.153 [2024-11-20 17:12:43.267142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.153 [2024-11-20 17:12:43.267147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.153 [2024-11-20 17:12:43.267304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.153 [2024-11-20 17:12:43.267457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.153 [2024-11-20 17:12:43.267463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.153 [2024-11-20 17:12:43.267468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.153 [2024-11-20 17:12:43.267472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.153 [2024-11-20 17:12:43.279326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.153 [2024-11-20 17:12:43.279871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.153 [2024-11-20 17:12:43.279901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.153 [2024-11-20 17:12:43.279910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.153 [2024-11-20 17:12:43.280077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.153 [2024-11-20 17:12:43.280238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.153 [2024-11-20 17:12:43.280245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.153 [2024-11-20 17:12:43.280251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.153 [2024-11-20 17:12:43.280256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.153 [2024-11-20 17:12:43.291970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.153 [2024-11-20 17:12:43.292522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.153 [2024-11-20 17:12:43.292552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.153 [2024-11-20 17:12:43.292560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.153 [2024-11-20 17:12:43.292728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.153 [2024-11-20 17:12:43.292887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.153 [2024-11-20 17:12:43.292894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.153 [2024-11-20 17:12:43.292900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.153 [2024-11-20 17:12:43.292905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.153 [2024-11-20 17:12:43.304623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.153 [2024-11-20 17:12:43.305071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.153 [2024-11-20 17:12:43.305085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.153 [2024-11-20 17:12:43.305091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.153 [2024-11-20 17:12:43.305249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.153 [2024-11-20 17:12:43.305402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.153 [2024-11-20 17:12:43.305407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.153 [2024-11-20 17:12:43.305413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.153 [2024-11-20 17:12:43.305417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.153 [2024-11-20 17:12:43.317263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.153 [2024-11-20 17:12:43.317609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.153 [2024-11-20 17:12:43.317622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.153 [2024-11-20 17:12:43.317627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.153 [2024-11-20 17:12:43.317778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.153 [2024-11-20 17:12:43.317930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.153 [2024-11-20 17:12:43.317936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.153 [2024-11-20 17:12:43.317941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.153 [2024-11-20 17:12:43.317945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.414 [2024-11-20 17:12:43.329923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.414 [2024-11-20 17:12:43.330478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.414 [2024-11-20 17:12:43.330508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.414 [2024-11-20 17:12:43.330517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.414 [2024-11-20 17:12:43.330685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.414 [2024-11-20 17:12:43.330840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.414 [2024-11-20 17:12:43.330846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.414 [2024-11-20 17:12:43.330856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.414 [2024-11-20 17:12:43.330861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.414 [2024-11-20 17:12:43.342583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.414 [2024-11-20 17:12:43.342911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.414 [2024-11-20 17:12:43.342927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.414 [2024-11-20 17:12:43.342932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.414 [2024-11-20 17:12:43.343086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.414 [2024-11-20 17:12:43.343242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.414 [2024-11-20 17:12:43.343249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.414 [2024-11-20 17:12:43.343254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.414 [2024-11-20 17:12:43.343259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.414 [2024-11-20 17:12:43.355234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.414 [2024-11-20 17:12:43.355567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.414 [2024-11-20 17:12:43.355580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.414 [2024-11-20 17:12:43.355585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.414 [2024-11-20 17:12:43.355736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.414 [2024-11-20 17:12:43.355888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.414 [2024-11-20 17:12:43.355894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.414 [2024-11-20 17:12:43.355899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.414 [2024-11-20 17:12:43.355904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.414 [2024-11-20 17:12:43.367880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.414 [2024-11-20 17:12:43.368435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.414 [2024-11-20 17:12:43.368465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.414 [2024-11-20 17:12:43.368474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.414 [2024-11-20 17:12:43.368642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.414 [2024-11-20 17:12:43.368798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.414 [2024-11-20 17:12:43.368804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.414 [2024-11-20 17:12:43.368810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.414 [2024-11-20 17:12:43.368816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.414 [2024-11-20 17:12:43.380518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.414 [2024-11-20 17:12:43.380965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.414 [2024-11-20 17:12:43.380979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.414 [2024-11-20 17:12:43.380985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.414 [2024-11-20 17:12:43.381137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.415 [2024-11-20 17:12:43.381294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.415 [2024-11-20 17:12:43.381300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.415 [2024-11-20 17:12:43.381305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.415 [2024-11-20 17:12:43.381310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.415 [2024-11-20 17:12:43.393283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.415 [2024-11-20 17:12:43.393739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.415 [2024-11-20 17:12:43.393752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.415 [2024-11-20 17:12:43.393757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.415 [2024-11-20 17:12:43.393909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.415 [2024-11-20 17:12:43.394060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.415 [2024-11-20 17:12:43.394066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.415 [2024-11-20 17:12:43.394071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.415 [2024-11-20 17:12:43.394076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.415 [2024-11-20 17:12:43.406045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.415 [2024-11-20 17:12:43.406506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.415 [2024-11-20 17:12:43.406518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.415 [2024-11-20 17:12:43.406523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.415 [2024-11-20 17:12:43.406674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.415 [2024-11-20 17:12:43.406826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.415 [2024-11-20 17:12:43.406832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.415 [2024-11-20 17:12:43.406837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.415 [2024-11-20 17:12:43.406842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.415 [2024-11-20 17:12:43.418670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.415 [2024-11-20 17:12:43.419123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.415 [2024-11-20 17:12:43.419136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.415 [2024-11-20 17:12:43.419144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.415 [2024-11-20 17:12:43.419300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.415 [2024-11-20 17:12:43.419451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.415 [2024-11-20 17:12:43.419457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.415 [2024-11-20 17:12:43.419462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.415 [2024-11-20 17:12:43.419467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.415 [2024-11-20 17:12:43.431303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.415 [2024-11-20 17:12:43.431841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.415 [2024-11-20 17:12:43.431870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.415 [2024-11-20 17:12:43.431879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.415 [2024-11-20 17:12:43.432047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.415 [2024-11-20 17:12:43.432207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.415 [2024-11-20 17:12:43.432214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.415 [2024-11-20 17:12:43.432219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.415 [2024-11-20 17:12:43.432225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.415 [2024-11-20 17:12:43.444072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.415 [2024-11-20 17:12:43.444632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.415 [2024-11-20 17:12:43.444663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.415 [2024-11-20 17:12:43.444672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.415 [2024-11-20 17:12:43.444839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.415 [2024-11-20 17:12:43.444994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.415 [2024-11-20 17:12:43.445000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.415 [2024-11-20 17:12:43.445006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.415 [2024-11-20 17:12:43.445012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.415 [2024-11-20 17:12:43.456716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.415 [2024-11-20 17:12:43.457170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.415 [2024-11-20 17:12:43.457189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.415 [2024-11-20 17:12:43.457195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.415 [2024-11-20 17:12:43.457348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.415 [2024-11-20 17:12:43.457504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.415 [2024-11-20 17:12:43.457510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.415 [2024-11-20 17:12:43.457516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.415 [2024-11-20 17:12:43.457521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.415 [2024-11-20 17:12:43.469358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.415 [2024-11-20 17:12:43.469814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.415 [2024-11-20 17:12:43.469827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.415 [2024-11-20 17:12:43.469832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.415 [2024-11-20 17:12:43.469983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.415 [2024-11-20 17:12:43.470136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.415 [2024-11-20 17:12:43.470142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.415 [2024-11-20 17:12:43.470147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.415 [2024-11-20 17:12:43.470152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.415 [2024-11-20 17:12:43.481993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.415 [2024-11-20 17:12:43.482473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.415 [2024-11-20 17:12:43.482486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.415 [2024-11-20 17:12:43.482491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.415 [2024-11-20 17:12:43.482643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.415 [2024-11-20 17:12:43.482794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.415 [2024-11-20 17:12:43.482801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.415 [2024-11-20 17:12:43.482806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.415 [2024-11-20 17:12:43.482811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.415 [2024-11-20 17:12:43.494635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.415 [2024-11-20 17:12:43.495002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.415 [2024-11-20 17:12:43.495014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.415 [2024-11-20 17:12:43.495019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.415 [2024-11-20 17:12:43.495174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.415 [2024-11-20 17:12:43.495327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.415 [2024-11-20 17:12:43.495333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.415 [2024-11-20 17:12:43.495340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.415 [2024-11-20 17:12:43.495345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.415 [2024-11-20 17:12:43.507310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.415 [2024-11-20 17:12:43.507849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.415 [2024-11-20 17:12:43.507879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.415 [2024-11-20 17:12:43.507887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.416 [2024-11-20 17:12:43.508055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.416 [2024-11-20 17:12:43.508217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.416 [2024-11-20 17:12:43.508224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.416 [2024-11-20 17:12:43.508229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.416 [2024-11-20 17:12:43.508235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.416 [2024-11-20 17:12:43.519967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.416 [2024-11-20 17:12:43.520452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.416 [2024-11-20 17:12:43.520468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.416 [2024-11-20 17:12:43.520473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.416 [2024-11-20 17:12:43.520626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.416 [2024-11-20 17:12:43.520778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.416 [2024-11-20 17:12:43.520784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.416 [2024-11-20 17:12:43.520789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.416 [2024-11-20 17:12:43.520794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.416 [2024-11-20 17:12:43.532646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.416 [2024-11-20 17:12:43.533099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.416 [2024-11-20 17:12:43.533112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.416 [2024-11-20 17:12:43.533118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.416 [2024-11-20 17:12:43.533272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.416 [2024-11-20 17:12:43.533426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.416 [2024-11-20 17:12:43.533431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.416 [2024-11-20 17:12:43.533436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.416 [2024-11-20 17:12:43.533441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.416 [2024-11-20 17:12:43.545294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.416 [2024-11-20 17:12:43.545838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.416 [2024-11-20 17:12:43.545869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.416 [2024-11-20 17:12:43.545878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.416 [2024-11-20 17:12:43.546048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.416 [2024-11-20 17:12:43.546209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.416 [2024-11-20 17:12:43.546216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.416 [2024-11-20 17:12:43.546223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.416 [2024-11-20 17:12:43.546228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.416 [2024-11-20 17:12:43.558062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.416 [2024-11-20 17:12:43.558495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.416 [2024-11-20 17:12:43.558509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.416 [2024-11-20 17:12:43.558515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.416 [2024-11-20 17:12:43.558667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.416 [2024-11-20 17:12:43.558819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.416 [2024-11-20 17:12:43.558825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.416 [2024-11-20 17:12:43.558831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.416 [2024-11-20 17:12:43.558836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.416 [2024-11-20 17:12:43.570803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.416 [2024-11-20 17:12:43.571298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.416 [2024-11-20 17:12:43.571329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.416 [2024-11-20 17:12:43.571337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.416 [2024-11-20 17:12:43.571506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.416 [2024-11-20 17:12:43.571661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.416 [2024-11-20 17:12:43.571668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.416 [2024-11-20 17:12:43.571673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.416 [2024-11-20 17:12:43.571679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.416 [2024-11-20 17:12:43.583545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.416 [2024-11-20 17:12:43.584074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.416 [2024-11-20 17:12:43.584103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.416 [2024-11-20 17:12:43.584116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.416 [2024-11-20 17:12:43.584291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.416 [2024-11-20 17:12:43.584447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.416 [2024-11-20 17:12:43.584453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.416 [2024-11-20 17:12:43.584458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.416 [2024-11-20 17:12:43.584464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.677 [2024-11-20 17:12:43.596182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.677 [2024-11-20 17:12:43.596632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.677 [2024-11-20 17:12:43.596648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.677 [2024-11-20 17:12:43.596654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.677 [2024-11-20 17:12:43.596806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.677 [2024-11-20 17:12:43.596958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.677 [2024-11-20 17:12:43.596965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.677 [2024-11-20 17:12:43.596970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.677 [2024-11-20 17:12:43.596974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.677 [2024-11-20 17:12:43.608810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.677 [2024-11-20 17:12:43.609247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.677 [2024-11-20 17:12:43.609267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.677 [2024-11-20 17:12:43.609273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.677 [2024-11-20 17:12:43.609431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.677 [2024-11-20 17:12:43.609585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.677 [2024-11-20 17:12:43.609591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.677 [2024-11-20 17:12:43.609596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.677 [2024-11-20 17:12:43.609603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.677 [2024-11-20 17:12:43.621441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.677 [2024-11-20 17:12:43.621905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.677 [2024-11-20 17:12:43.621919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.677 [2024-11-20 17:12:43.621924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.677 [2024-11-20 17:12:43.622076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.677 [2024-11-20 17:12:43.622237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.677 [2024-11-20 17:12:43.622244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.677 [2024-11-20 17:12:43.622249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.677 [2024-11-20 17:12:43.622253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.677 [2024-11-20 17:12:43.634083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.677 [2024-11-20 17:12:43.634597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.677 [2024-11-20 17:12:43.634610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.677 [2024-11-20 17:12:43.634615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.677 [2024-11-20 17:12:43.634767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.677 [2024-11-20 17:12:43.634918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.677 [2024-11-20 17:12:43.634924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.677 [2024-11-20 17:12:43.634929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.677 [2024-11-20 17:12:43.634934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.677 [2024-11-20 17:12:43.646768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.677 [2024-11-20 17:12:43.647189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.677 [2024-11-20 17:12:43.647203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.677 [2024-11-20 17:12:43.647208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.677 [2024-11-20 17:12:43.647360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.677 [2024-11-20 17:12:43.647511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.677 [2024-11-20 17:12:43.647517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.677 [2024-11-20 17:12:43.647522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.677 [2024-11-20 17:12:43.647527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.677 [2024-11-20 17:12:43.659518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.678 [2024-11-20 17:12:43.659929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.678 [2024-11-20 17:12:43.659942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.678 [2024-11-20 17:12:43.659947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.678 [2024-11-20 17:12:43.660098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.678 [2024-11-20 17:12:43.660254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.678 [2024-11-20 17:12:43.660261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.678 [2024-11-20 17:12:43.660271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.678 [2024-11-20 17:12:43.660276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.678 [2024-11-20 17:12:43.672243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.678 [2024-11-20 17:12:43.672649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.678 [2024-11-20 17:12:43.672661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.678 [2024-11-20 17:12:43.672666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.678 [2024-11-20 17:12:43.672818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.678 [2024-11-20 17:12:43.672969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.678 [2024-11-20 17:12:43.672975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.678 [2024-11-20 17:12:43.672980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.678 [2024-11-20 17:12:43.672984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.678 [2024-11-20 17:12:43.684901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.678 [2024-11-20 17:12:43.685384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.678 [2024-11-20 17:12:43.685414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.678 [2024-11-20 17:12:43.685423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.678 [2024-11-20 17:12:43.685594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.678 [2024-11-20 17:12:43.685748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.678 [2024-11-20 17:12:43.685755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.678 [2024-11-20 17:12:43.685761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.678 [2024-11-20 17:12:43.685766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.678 [2024-11-20 17:12:43.697627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.678 [2024-11-20 17:12:43.698212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.678 [2024-11-20 17:12:43.698243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.678 [2024-11-20 17:12:43.698251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.678 [2024-11-20 17:12:43.698421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.678 [2024-11-20 17:12:43.698576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.678 [2024-11-20 17:12:43.698583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.678 [2024-11-20 17:12:43.698589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.678 [2024-11-20 17:12:43.698595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.678 [2024-11-20 17:12:43.710289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.678 [2024-11-20 17:12:43.710761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.678 [2024-11-20 17:12:43.710776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.678 [2024-11-20 17:12:43.710781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.678 [2024-11-20 17:12:43.710933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.678 [2024-11-20 17:12:43.711085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.678 [2024-11-20 17:12:43.711091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.678 [2024-11-20 17:12:43.711096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.678 [2024-11-20 17:12:43.711101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2146494 Killed "${NVMF_APP[@]}" "$@"
00:29:51.678 17:12:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:51.678 17:12:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:51.678 17:12:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:51.678 17:12:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:51.678 17:12:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:51.678 [2024-11-20 17:12:43.722941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.678 [2024-11-20 17:12:43.723306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.678 [2024-11-20 17:12:43.723320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.678 [2024-11-20 17:12:43.723325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.678 [2024-11-20 17:12:43.723477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.678 [2024-11-20 17:12:43.723629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.678 [2024-11-20 17:12:43.723635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.678 [2024-11-20 17:12:43.723640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.678 [2024-11-20 17:12:43.723645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.678 17:12:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2148131
00:29:51.678 17:12:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2148131
00:29:51.678 17:12:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:51.678 17:12:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2148131 ']'
00:29:51.678 17:12:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:51.678 17:12:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:51.678 17:12:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:51.678 17:12:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:51.678 17:12:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:51.678 [2024-11-20 17:12:43.735625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.678 [2024-11-20 17:12:43.736186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.678 [2024-11-20 17:12:43.736216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.678 [2024-11-20 17:12:43.736225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.678 [2024-11-20 17:12:43.736393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.678 [2024-11-20 17:12:43.736549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.678 [2024-11-20 17:12:43.736556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.678 [2024-11-20 17:12:43.736562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.678 [2024-11-20 17:12:43.736568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.678 [2024-11-20 17:12:43.748289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.678 [2024-11-20 17:12:43.748756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.678 [2024-11-20 17:12:43.748771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.678 [2024-11-20 17:12:43.748776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.678 [2024-11-20 17:12:43.748929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.678 [2024-11-20 17:12:43.749081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.678 [2024-11-20 17:12:43.749087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.678 [2024-11-20 17:12:43.749092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.678 [2024-11-20 17:12:43.749097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
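The xtrace lines interleaved with the retries show the recovery path: bdevperf.sh line 35 reports the old target (PID 2146494) was killed, and tgt_init/nvmfappstart relaunch nvmf_tgt while waitforlisten blocks on the RPC socket. Condensed into plain commands (paths, flags and the netns name are exactly the ones logged above; the suite captures the PID inside nvmfappstart rather than with a bare '&'):

  # Relaunch the target in the test namespace and wait for its RPC socket.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!                 # 2148131 in this run
  waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock, up to max_retries=100

The -e 0xFFFF argument is the tracepoint group mask; it is what produces the app_setup_trace notices further down once the target gets far enough to print them.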
00:29:51.678 [2024-11-20 17:12:43.760928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.679 [2024-11-20 17:12:43.761400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.679 [2024-11-20 17:12:43.761413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.679 [2024-11-20 17:12:43.761418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.679 [2024-11-20 17:12:43.761570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.679 [2024-11-20 17:12:43.761722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.679 [2024-11-20 17:12:43.761728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.679 [2024-11-20 17:12:43.761733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.679 [2024-11-20 17:12:43.761737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.679 [2024-11-20 17:12:43.773571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.679 [2024-11-20 17:12:43.774061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.679 [2024-11-20 17:12:43.774094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.679 [2024-11-20 17:12:43.774103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.679 [2024-11-20 17:12:43.774277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.679 [2024-11-20 17:12:43.774433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.679 [2024-11-20 17:12:43.774440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.679 [2024-11-20 17:12:43.774445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.679 [2024-11-20 17:12:43.774451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.679 [2024-11-20 17:12:43.780245] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization...
00:29:51.679 [2024-11-20 17:12:43.780292] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:51.679 [2024-11-20 17:12:43.786325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.679 [2024-11-20 17:12:43.786772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.679 [2024-11-20 17:12:43.786787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.679 [2024-11-20 17:12:43.786793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.679 [2024-11-20 17:12:43.786945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.679 [2024-11-20 17:12:43.787097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.679 [2024-11-20 17:12:43.787104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.679 [2024-11-20 17:12:43.787109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.679 [2024-11-20 17:12:43.787115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.679 [2024-11-20 17:12:43.798965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.679 [2024-11-20 17:12:43.799384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.679 [2024-11-20 17:12:43.799397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.679 [2024-11-20 17:12:43.799403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.679 [2024-11-20 17:12:43.799555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.679 [2024-11-20 17:12:43.799707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.679 [2024-11-20 17:12:43.799713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.679 [2024-11-20 17:12:43.799718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.679 [2024-11-20 17:12:43.799723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.679 [2024-11-20 17:12:43.811718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.679 [2024-11-20 17:12:43.812264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.679 [2024-11-20 17:12:43.812298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.679 [2024-11-20 17:12:43.812307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.679 [2024-11-20 17:12:43.812477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.679 [2024-11-20 17:12:43.812632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.679 [2024-11-20 17:12:43.812639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.679 [2024-11-20 17:12:43.812644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.679 [2024-11-20 17:12:43.812650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.679 [2024-11-20 17:12:43.824374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.679 [2024-11-20 17:12:43.824924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.679 [2024-11-20 17:12:43.824954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.679 [2024-11-20 17:12:43.824963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.679 [2024-11-20 17:12:43.825131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.679 [2024-11-20 17:12:43.825293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.679 [2024-11-20 17:12:43.825300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.679 [2024-11-20 17:12:43.825306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.679 [2024-11-20 17:12:43.825312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.679 [2024-11-20 17:12:43.837114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.679 [2024-11-20 17:12:43.837615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.679 [2024-11-20 17:12:43.837644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.679 [2024-11-20 17:12:43.837653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.679 [2024-11-20 17:12:43.837821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.679 [2024-11-20 17:12:43.837976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.679 [2024-11-20 17:12:43.837985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.679 [2024-11-20 17:12:43.837991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.679 [2024-11-20 17:12:43.837997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.940 [2024-11-20 17:12:43.849861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.940 [2024-11-20 17:12:43.850198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-11-20 17:12:43.850220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.940 [2024-11-20 17:12:43.850227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.940 [2024-11-20 17:12:43.850389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.940 [2024-11-20 17:12:43.850543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.940 [2024-11-20 17:12:43.850549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.940 [2024-11-20 17:12:43.850554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.940 [2024-11-20 17:12:43.850559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.940 [2024-11-20 17:12:43.862548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.940 [2024-11-20 17:12:43.862971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.940 [2024-11-20 17:12:43.862985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.940 [2024-11-20 17:12:43.862991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.940 [2024-11-20 17:12:43.863143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.940 [2024-11-20 17:12:43.863300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.940 [2024-11-20 17:12:43.863307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.940 [2024-11-20 17:12:43.863312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.940 [2024-11-20 17:12:43.863317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.940 [2024-11-20 17:12:43.872625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:51.940 [2024-11-20 17:12:43.875291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.940 [2024-11-20 17:12:43.875851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.940 [2024-11-20 17:12:43.875882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.940 [2024-11-20 17:12:43.875891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.940 [2024-11-20 17:12:43.876059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.940 [2024-11-20 17:12:43.876230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.940 [2024-11-20 17:12:43.876238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.940 [2024-11-20 17:12:43.876244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.940 [2024-11-20 17:12:43.876250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.940 [2024-11-20 17:12:43.887970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.940 [2024-11-20 17:12:43.888557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.940 [2024-11-20 17:12:43.888588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.940 [2024-11-20 17:12:43.888596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.940 [2024-11-20 17:12:43.888765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.940 [2024-11-20 17:12:43.888924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.940 [2024-11-20 17:12:43.888930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.940 [2024-11-20 17:12:43.888936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.940 [2024-11-20 17:12:43.888942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.940 [2024-11-20 17:12:43.900661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.940 [2024-11-20 17:12:43.901116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.940 [2024-11-20 17:12:43.901146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.940 [2024-11-20 17:12:43.901155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.940 [2024-11-20 17:12:43.901333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.940 [2024-11-20 17:12:43.901488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.940 [2024-11-20 17:12:43.901495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.940 [2024-11-20 17:12:43.901501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.940 [2024-11-20 17:12:43.901507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.940 [2024-11-20 17:12:43.901693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:51.940 [2024-11-20 17:12:43.901714] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:51.940 [2024-11-20 17:12:43.901721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:51.940 [2024-11-20 17:12:43.901726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:51.941 [2024-11-20 17:12:43.901731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
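The app_setup_trace notices spell out both ways of pulling the trace once the target is up: decode live with the app name and the instance ID that was passed as -i 0, or keep the raw shared-memory file. Both commands come straight from the notices; only the output filename is arbitrary:

  spdk_trace -s nvmf -i 0 > nvmf_trace.txt   # live snapshot of events at runtime
  cp /dev/shm/nvmf_trace.0 .                 # raw copy for offline analysis/debug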
00:29:51.941 [2024-11-20 17:12:43.902930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:51.941 [2024-11-20 17:12:43.903081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:51.941 [2024-11-20 17:12:43.903083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:51.941 [2024-11-20 17:12:43.913366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.941 [2024-11-20 17:12:43.913934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.941 [2024-11-20 17:12:43.913964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.941 [2024-11-20 17:12:43.913974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.941 [2024-11-20 17:12:43.914143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.941 [2024-11-20 17:12:43.914306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.941 [2024-11-20 17:12:43.914313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.941 [2024-11-20 17:12:43.914319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.941 [2024-11-20 17:12:43.914325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.941 [2024-11-20 17:12:43.926033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.941 [2024-11-20 17:12:43.926531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.941 [2024-11-20 17:12:43.926562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.941 [2024-11-20 17:12:43.926571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.941 [2024-11-20 17:12:43.926740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.941 [2024-11-20 17:12:43.926895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.941 [2024-11-20 17:12:43.926902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.941 [2024-11-20 17:12:43.926909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.941 [2024-11-20 17:12:43.926914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
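The three reactor notices follow directly from the -m 0xE mask passed to nvmf_tgt: 0xE is binary 1110, so core 0 is excluded and cores 1, 2 and 3 each get a reactor, which also matches the earlier "Total cores available: 3". An illustrative one-liner for decoding any such mask:

  mask=0xE
  # Print each core whose bit is set in the mask (bits 1-3 here).
  for bit in {0..31}; do (( (mask >> bit) & 1 )) && echo "reactor on core $bit"; done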
00:29:51.941 [2024-11-20 17:12:43.938776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.941 [2024-11-20 17:12:43.939261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-11-20 17:12:43.939291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.941 [2024-11-20 17:12:43.939300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.941 [2024-11-20 17:12:43.939468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.941 [2024-11-20 17:12:43.939624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.941 [2024-11-20 17:12:43.939630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.941 [2024-11-20 17:12:43.939636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.941 [2024-11-20 17:12:43.939642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.941 [2024-11-20 17:12:43.951498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.941 [2024-11-20 17:12:43.952072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-11-20 17:12:43.952103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:51.941 [2024-11-20 17:12:43.952112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:51.941 [2024-11-20 17:12:43.952289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:51.941 [2024-11-20 17:12:43.952445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.941 [2024-11-20 17:12:43.952452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.941 [2024-11-20 17:12:43.952458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.941 [2024-11-20 17:12:43.952463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.941 [2024-11-20 17:12:43.964153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.941 [2024-11-20 17:12:43.964688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.941 [2024-11-20 17:12:43.964718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.941 [2024-11-20 17:12:43.964727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.941 [2024-11-20 17:12:43.964899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.941 [2024-11-20 17:12:43.965055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.941 [2024-11-20 17:12:43.965061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.941 [2024-11-20 17:12:43.965068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.941 [2024-11-20 17:12:43.965074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.941 [2024-11-20 17:12:43.976791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.941 [2024-11-20 17:12:43.977380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.941 [2024-11-20 17:12:43.977410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.941 [2024-11-20 17:12:43.977419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.941 [2024-11-20 17:12:43.977588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.941 [2024-11-20 17:12:43.977743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.941 [2024-11-20 17:12:43.977749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.941 [2024-11-20 17:12:43.977755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.941 [2024-11-20 17:12:43.977761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.941 [2024-11-20 17:12:43.989482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.941 [2024-11-20 17:12:43.990049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.941 [2024-11-20 17:12:43.990079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.941 [2024-11-20 17:12:43.990088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.941 [2024-11-20 17:12:43.990263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.941 [2024-11-20 17:12:43.990419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.941 [2024-11-20 17:12:43.990425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.941 [2024-11-20 17:12:43.990431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.941 [2024-11-20 17:12:43.990436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.941 [2024-11-20 17:12:44.002144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.941 [2024-11-20 17:12:44.002718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.941 [2024-11-20 17:12:44.002749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.941 [2024-11-20 17:12:44.002758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.941 [2024-11-20 17:12:44.002926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.941 [2024-11-20 17:12:44.003082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.941 [2024-11-20 17:12:44.003092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.941 [2024-11-20 17:12:44.003098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.941 [2024-11-20 17:12:44.003103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.941 [2024-11-20 17:12:44.014813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.941 [2024-11-20 17:12:44.015444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.941 [2024-11-20 17:12:44.015474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.941 [2024-11-20 17:12:44.015483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.941 [2024-11-20 17:12:44.015652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.941 [2024-11-20 17:12:44.015807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.941 [2024-11-20 17:12:44.015815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.941 [2024-11-20 17:12:44.015821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.941 [2024-11-20 17:12:44.015827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.941 [2024-11-20 17:12:44.027520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.941 [2024-11-20 17:12:44.028070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.941 [2024-11-20 17:12:44.028100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.941 [2024-11-20 17:12:44.028108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.941 [2024-11-20 17:12:44.028284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.942 [2024-11-20 17:12:44.028440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.942 [2024-11-20 17:12:44.028446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.942 [2024-11-20 17:12:44.028452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.942 [2024-11-20 17:12:44.028459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.942 [2024-11-20 17:12:44.040174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.942 [2024-11-20 17:12:44.040727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.942 [2024-11-20 17:12:44.040757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.942 [2024-11-20 17:12:44.040766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.942 [2024-11-20 17:12:44.040935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.942 [2024-11-20 17:12:44.041090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.942 [2024-11-20 17:12:44.041097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.942 [2024-11-20 17:12:44.041102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.942 [2024-11-20 17:12:44.041112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.942 [2024-11-20 17:12:44.052814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.942 [2024-11-20 17:12:44.053144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.942 [2024-11-20 17:12:44.053163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.942 [2024-11-20 17:12:44.053169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.942 [2024-11-20 17:12:44.053321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.942 [2024-11-20 17:12:44.053474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.942 [2024-11-20 17:12:44.053479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.942 [2024-11-20 17:12:44.053484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.942 [2024-11-20 17:12:44.053490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.942 [2024-11-20 17:12:44.065485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.942 [2024-11-20 17:12:44.065961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.942 [2024-11-20 17:12:44.065974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.942 [2024-11-20 17:12:44.065979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.942 [2024-11-20 17:12:44.066131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.942 [2024-11-20 17:12:44.066289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.942 [2024-11-20 17:12:44.066296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.942 [2024-11-20 17:12:44.066301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.942 [2024-11-20 17:12:44.066306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.942 [2024-11-20 17:12:44.078128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.942 [2024-11-20 17:12:44.078727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.942 [2024-11-20 17:12:44.078757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.942 [2024-11-20 17:12:44.078766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.942 [2024-11-20 17:12:44.078934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.942 [2024-11-20 17:12:44.079089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.942 [2024-11-20 17:12:44.079095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.942 [2024-11-20 17:12:44.079101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.942 [2024-11-20 17:12:44.079107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.942 [2024-11-20 17:12:44.090807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.942 [2024-11-20 17:12:44.091249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.942 [2024-11-20 17:12:44.091279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.942 [2024-11-20 17:12:44.091288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.942 [2024-11-20 17:12:44.091456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.942 [2024-11-20 17:12:44.091611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.942 [2024-11-20 17:12:44.091618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.942 [2024-11-20 17:12:44.091624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.942 [2024-11-20 17:12:44.091629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:51.942 [2024-11-20 17:12:44.103472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:51.942 [2024-11-20 17:12:44.104068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.942 [2024-11-20 17:12:44.104098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:51.942 [2024-11-20 17:12:44.104107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:51.942 [2024-11-20 17:12:44.104283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:51.942 [2024-11-20 17:12:44.104438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:51.942 [2024-11-20 17:12:44.104445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:51.942 [2024-11-20 17:12:44.104450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:51.942 [2024-11-20 17:12:44.104456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.203 [2024-11-20 17:12:44.116139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.203 [2024-11-20 17:12:44.116751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.203 [2024-11-20 17:12:44.116781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.203 [2024-11-20 17:12:44.116790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.203 [2024-11-20 17:12:44.116958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.203 [2024-11-20 17:12:44.117113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.203 [2024-11-20 17:12:44.117120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.203 [2024-11-20 17:12:44.117126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.203 [2024-11-20 17:12:44.117131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.203 [2024-11-20 17:12:44.128842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.203 [2024-11-20 17:12:44.129496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.203 [2024-11-20 17:12:44.129526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.203 [2024-11-20 17:12:44.129535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.203 [2024-11-20 17:12:44.129708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.203 [2024-11-20 17:12:44.129863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.203 [2024-11-20 17:12:44.129870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.203 [2024-11-20 17:12:44.129875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.203 [2024-11-20 17:12:44.129881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.203 [2024-11-20 17:12:44.141572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.203 [2024-11-20 17:12:44.142143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.203 [2024-11-20 17:12:44.142178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.203 [2024-11-20 17:12:44.142187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.203 [2024-11-20 17:12:44.142355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.203 [2024-11-20 17:12:44.142510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.203 [2024-11-20 17:12:44.142516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.203 [2024-11-20 17:12:44.142522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.203 [2024-11-20 17:12:44.142528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.203 [2024-11-20 17:12:44.154213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.203 [2024-11-20 17:12:44.154763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.203 [2024-11-20 17:12:44.154793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.203 [2024-11-20 17:12:44.154802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.203 [2024-11-20 17:12:44.154970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.203 [2024-11-20 17:12:44.155125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.203 [2024-11-20 17:12:44.155131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.203 [2024-11-20 17:12:44.155137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.203 [2024-11-20 17:12:44.155142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.203 [2024-11-20 17:12:44.166858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.203 [2024-11-20 17:12:44.167307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.203 [2024-11-20 17:12:44.167337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.203 [2024-11-20 17:12:44.167346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.203 [2024-11-20 17:12:44.167514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.203 [2024-11-20 17:12:44.167669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.203 [2024-11-20 17:12:44.167680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.204 [2024-11-20 17:12:44.167686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.204 [2024-11-20 17:12:44.167692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.204 [2024-11-20 17:12:44.179560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.204 [2024-11-20 17:12:44.180103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.204 [2024-11-20 17:12:44.180133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.204 [2024-11-20 17:12:44.180142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.204 [2024-11-20 17:12:44.180318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.204 [2024-11-20 17:12:44.180473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.204 [2024-11-20 17:12:44.180480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.204 [2024-11-20 17:12:44.180485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.204 [2024-11-20 17:12:44.180491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.204 [2024-11-20 17:12:44.192192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.204 [2024-11-20 17:12:44.192783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.204 [2024-11-20 17:12:44.192813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.204 [2024-11-20 17:12:44.192822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.204 [2024-11-20 17:12:44.192990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.204 [2024-11-20 17:12:44.193145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.204 [2024-11-20 17:12:44.193151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.204 [2024-11-20 17:12:44.193157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.204 [2024-11-20 17:12:44.193169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.204 [2024-11-20 17:12:44.204872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.204 [2024-11-20 17:12:44.205279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.204 [2024-11-20 17:12:44.205309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.204 [2024-11-20 17:12:44.205318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.204 [2024-11-20 17:12:44.205488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.204 [2024-11-20 17:12:44.205643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.204 [2024-11-20 17:12:44.205650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.204 [2024-11-20 17:12:44.205656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.204 [2024-11-20 17:12:44.205665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.204 [2024-11-20 17:12:44.217525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.204 [2024-11-20 17:12:44.218070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.204 [2024-11-20 17:12:44.218099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.204 [2024-11-20 17:12:44.218108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.204 [2024-11-20 17:12:44.218282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.204 [2024-11-20 17:12:44.218438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.204 [2024-11-20 17:12:44.218444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.204 [2024-11-20 17:12:44.218450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.204 [2024-11-20 17:12:44.218456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.204 4508.83 IOPS, 17.61 MiB/s [2024-11-20T16:12:44.380Z] [2024-11-20 17:12:44.230290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.204 [2024-11-20 17:12:44.230838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.204 [2024-11-20 17:12:44.230868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.204 [2024-11-20 17:12:44.230877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.204 [2024-11-20 17:12:44.231045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.204 [2024-11-20 17:12:44.231207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.204 [2024-11-20 17:12:44.231214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.204 [2024-11-20 17:12:44.231220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.204 [2024-11-20 17:12:44.231225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.204 [2024-11-20 17:12:44.242929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.204 [2024-11-20 17:12:44.243510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.204 [2024-11-20 17:12:44.243540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.204 [2024-11-20 17:12:44.243549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.204 [2024-11-20 17:12:44.243719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.204 [2024-11-20 17:12:44.243874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.204 [2024-11-20 17:12:44.243880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.204 [2024-11-20 17:12:44.243886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.204 [2024-11-20 17:12:44.243891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.204 [2024-11-20 17:12:44.255596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.204 [2024-11-20 17:12:44.256059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.204 [2024-11-20 17:12:44.256089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.204 [2024-11-20 17:12:44.256097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.204 [2024-11-20 17:12:44.256271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.204 [2024-11-20 17:12:44.256427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.204 [2024-11-20 17:12:44.256433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.204 [2024-11-20 17:12:44.256439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.204 [2024-11-20 17:12:44.256445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.204 [2024-11-20 17:12:44.268292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.204 [2024-11-20 17:12:44.268808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.204 [2024-11-20 17:12:44.268837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.204 [2024-11-20 17:12:44.268846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.204 [2024-11-20 17:12:44.269014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.204 [2024-11-20 17:12:44.269175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.204 [2024-11-20 17:12:44.269182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.204 [2024-11-20 17:12:44.269188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.204 [2024-11-20 17:12:44.269193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.204 [2024-11-20 17:12:44.281027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.204 [2024-11-20 17:12:44.281499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.204 [2024-11-20 17:12:44.281513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.204 [2024-11-20 17:12:44.281519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.204 [2024-11-20 17:12:44.281671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.204 [2024-11-20 17:12:44.281823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.204 [2024-11-20 17:12:44.281829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.204 [2024-11-20 17:12:44.281834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.204 [2024-11-20 17:12:44.281839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.205 [2024-11-20 17:12:44.293662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.205 [2024-11-20 17:12:44.294142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.205 [2024-11-20 17:12:44.294155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.205 [2024-11-20 17:12:44.294165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.205 [2024-11-20 17:12:44.294324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.205 [2024-11-20 17:12:44.294476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.205 [2024-11-20 17:12:44.294482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.205 [2024-11-20 17:12:44.294487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.205 [2024-11-20 17:12:44.294492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.205 [2024-11-20 17:12:44.306332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.205 [2024-11-20 17:12:44.306889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.205 [2024-11-20 17:12:44.306919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.205 [2024-11-20 17:12:44.306928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.205 [2024-11-20 17:12:44.307095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.205 [2024-11-20 17:12:44.307256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.205 [2024-11-20 17:12:44.307264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.205 [2024-11-20 17:12:44.307269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.205 [2024-11-20 17:12:44.307274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.205 [2024-11-20 17:12:44.319107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.205 [2024-11-20 17:12:44.319538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.205 [2024-11-20 17:12:44.319553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.205 [2024-11-20 17:12:44.319558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.205 [2024-11-20 17:12:44.319711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.205 [2024-11-20 17:12:44.319863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.205 [2024-11-20 17:12:44.319868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.205 [2024-11-20 17:12:44.319873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.205 [2024-11-20 17:12:44.319878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.205 [2024-11-20 17:12:44.331866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.205 [2024-11-20 17:12:44.332420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.205 [2024-11-20 17:12:44.332450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.205 [2024-11-20 17:12:44.332459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.205 [2024-11-20 17:12:44.332627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.205 [2024-11-20 17:12:44.332782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.205 [2024-11-20 17:12:44.332792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.205 [2024-11-20 17:12:44.332798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.205 [2024-11-20 17:12:44.332804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.205 [2024-11-20 17:12:44.344519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.205 [2024-11-20 17:12:44.344888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.205 [2024-11-20 17:12:44.344902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.205 [2024-11-20 17:12:44.344908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.205 [2024-11-20 17:12:44.345060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.205 [2024-11-20 17:12:44.345217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.205 [2024-11-20 17:12:44.345224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.205 [2024-11-20 17:12:44.345229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.205 [2024-11-20 17:12:44.345234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.205 [2024-11-20 17:12:44.357203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.205 [2024-11-20 17:12:44.357725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.205 [2024-11-20 17:12:44.357755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.205 [2024-11-20 17:12:44.357763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.205 [2024-11-20 17:12:44.357931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.205 [2024-11-20 17:12:44.358086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.205 [2024-11-20 17:12:44.358093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.205 [2024-11-20 17:12:44.358098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.205 [2024-11-20 17:12:44.358104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.205 [2024-11-20 17:12:44.369940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.205 [2024-11-20 17:12:44.370544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.205 [2024-11-20 17:12:44.370575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.205 [2024-11-20 17:12:44.370583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.205 [2024-11-20 17:12:44.370753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.205 [2024-11-20 17:12:44.370908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.205 [2024-11-20 17:12:44.370915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.205 [2024-11-20 17:12:44.370921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.205 [2024-11-20 17:12:44.370930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.467 [2024-11-20 17:12:44.382631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.467 [2024-11-20 17:12:44.383093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.467 [2024-11-20 17:12:44.383108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.467 [2024-11-20 17:12:44.383115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.467 [2024-11-20 17:12:44.383272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.467 [2024-11-20 17:12:44.383426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.467 [2024-11-20 17:12:44.383431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.467 [2024-11-20 17:12:44.383437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.467 [2024-11-20 17:12:44.383442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.467 [2024-11-20 17:12:44.395278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.467 [2024-11-20 17:12:44.395825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.467 [2024-11-20 17:12:44.395854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.467 [2024-11-20 17:12:44.395863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.467 [2024-11-20 17:12:44.396031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.467 [2024-11-20 17:12:44.396191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.467 [2024-11-20 17:12:44.396198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.467 [2024-11-20 17:12:44.396204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.467 [2024-11-20 17:12:44.396210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.467 [2024-11-20 17:12:44.407916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.467 [2024-11-20 17:12:44.408475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.467 [2024-11-20 17:12:44.408506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.467 [2024-11-20 17:12:44.408515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.467 [2024-11-20 17:12:44.408683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.467 [2024-11-20 17:12:44.408838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.467 [2024-11-20 17:12:44.408844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.467 [2024-11-20 17:12:44.408850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.467 [2024-11-20 17:12:44.408856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.467 [2024-11-20 17:12:44.420563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.467 [2024-11-20 17:12:44.421111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.467 [2024-11-20 17:12:44.421141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.467 [2024-11-20 17:12:44.421150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.467 [2024-11-20 17:12:44.421323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.467 [2024-11-20 17:12:44.421479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.467 [2024-11-20 17:12:44.421485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.467 [2024-11-20 17:12:44.421491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.467 [2024-11-20 17:12:44.421497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.467 [2024-11-20 17:12:44.433207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.467 [2024-11-20 17:12:44.433786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.467 [2024-11-20 17:12:44.433816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.467 [2024-11-20 17:12:44.433825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.467 [2024-11-20 17:12:44.433993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.468 [2024-11-20 17:12:44.434148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.468 [2024-11-20 17:12:44.434154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.468 [2024-11-20 17:12:44.434166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.468 [2024-11-20 17:12:44.434172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.468 [2024-11-20 17:12:44.445894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.468 [2024-11-20 17:12:44.446122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.468 [2024-11-20 17:12:44.446142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.468 [2024-11-20 17:12:44.446149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.468 [2024-11-20 17:12:44.446312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.468 [2024-11-20 17:12:44.446468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.468 [2024-11-20 17:12:44.446473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.468 [2024-11-20 17:12:44.446479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.468 [2024-11-20 17:12:44.446484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.468 [2024-11-20 17:12:44.458603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.468 [2024-11-20 17:12:44.459131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.468 [2024-11-20 17:12:44.459166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.468 [2024-11-20 17:12:44.459176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.468 [2024-11-20 17:12:44.459350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.468 [2024-11-20 17:12:44.459505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.468 [2024-11-20 17:12:44.459512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.468 [2024-11-20 17:12:44.459518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.468 [2024-11-20 17:12:44.459523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.468 [2024-11-20 17:12:44.471360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.468 [2024-11-20 17:12:44.471909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.468 [2024-11-20 17:12:44.471938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.468 [2024-11-20 17:12:44.471947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.468 [2024-11-20 17:12:44.472115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.468 [2024-11-20 17:12:44.472278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.468 [2024-11-20 17:12:44.472286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.468 [2024-11-20 17:12:44.472291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.468 [2024-11-20 17:12:44.472297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.468 [2024-11-20 17:12:44.483993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.468 [2024-11-20 17:12:44.484448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.468 [2024-11-20 17:12:44.484478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.468 [2024-11-20 17:12:44.484487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.468 [2024-11-20 17:12:44.484655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.468 [2024-11-20 17:12:44.484811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.468 [2024-11-20 17:12:44.484818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.468 [2024-11-20 17:12:44.484823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.468 [2024-11-20 17:12:44.484829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.468 [2024-11-20 17:12:44.496660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.468 [2024-11-20 17:12:44.497217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.468 [2024-11-20 17:12:44.497247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.468 [2024-11-20 17:12:44.497256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.468 [2024-11-20 17:12:44.497424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.468 [2024-11-20 17:12:44.497580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.468 [2024-11-20 17:12:44.497592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.468 [2024-11-20 17:12:44.497598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.468 [2024-11-20 17:12:44.497603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.468 [2024-11-20 17:12:44.509298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.468 [2024-11-20 17:12:44.509843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.468 [2024-11-20 17:12:44.509873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.468 [2024-11-20 17:12:44.509882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.468 [2024-11-20 17:12:44.510050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.468 [2024-11-20 17:12:44.510211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.468 [2024-11-20 17:12:44.510218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.468 [2024-11-20 17:12:44.510223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.468 [2024-11-20 17:12:44.510229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.468 [2024-11-20 17:12:44.521963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.468 [2024-11-20 17:12:44.522560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.468 [2024-11-20 17:12:44.522590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.468 [2024-11-20 17:12:44.522599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.468 [2024-11-20 17:12:44.522767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.468 [2024-11-20 17:12:44.522922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.468 [2024-11-20 17:12:44.522929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.468 [2024-11-20 17:12:44.522934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.468 [2024-11-20 17:12:44.522940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.468 [2024-11-20 17:12:44.534657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.468 [2024-11-20 17:12:44.535110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.468 [2024-11-20 17:12:44.535140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.468 [2024-11-20 17:12:44.535149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.468 [2024-11-20 17:12:44.535327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.468 [2024-11-20 17:12:44.535483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.468 [2024-11-20 17:12:44.535490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.468 [2024-11-20 17:12:44.535496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.468 [2024-11-20 17:12:44.535505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.468 [2024-11-20 17:12:44.547367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.468 [2024-11-20 17:12:44.547829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.468 [2024-11-20 17:12:44.547844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.468 [2024-11-20 17:12:44.547849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.468 [2024-11-20 17:12:44.548001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.468 [2024-11-20 17:12:44.548154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.468 [2024-11-20 17:12:44.548164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.468 [2024-11-20 17:12:44.548170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.468 [2024-11-20 17:12:44.548175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.468 [2024-11-20 17:12:44.560018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:52.468 [2024-11-20 17:12:44.560491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.468 [2024-11-20 17:12:44.560506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420
00:29:52.468 [2024-11-20 17:12:44.560511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set
00:29:52.469 [2024-11-20 17:12:44.560664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor
00:29:52.469 [2024-11-20 17:12:44.560816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:52.469 [2024-11-20 17:12:44.560822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:52.469 [2024-11-20 17:12:44.560826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:52.469 [2024-11-20 17:12:44.560831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:52.469 [2024-11-20 17:12:44.572736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:52.469 [2024-11-20 17:12:44.573199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.469 [2024-11-20 17:12:44.573219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:52.469 [2024-11-20 17:12:44.573224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:52.469 [2024-11-20 17:12:44.573381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:52.469 [2024-11-20 17:12:44.573534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:52.469 [2024-11-20 17:12:44.573540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:52.469 [2024-11-20 17:12:44.573546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:52.469 [2024-11-20 17:12:44.573551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:52.469 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.469 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:52.469 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:52.469 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:52.469 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.469 [2024-11-20 17:12:44.585395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:52.469 [2024-11-20 17:12:44.585848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.469 [2024-11-20 17:12:44.585862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:52.469 [2024-11-20 17:12:44.585867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:52.469 [2024-11-20 17:12:44.586019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:52.469 [2024-11-20 17:12:44.586176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:52.469 [2024-11-20 17:12:44.586182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:52.469 [2024-11-20 17:12:44.586187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:52.469 [2024-11-20 17:12:44.586191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:52.469 [2024-11-20 17:12:44.598162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:52.469 [2024-11-20 17:12:44.598694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.469 [2024-11-20 17:12:44.598725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:52.469 [2024-11-20 17:12:44.598735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:52.469 [2024-11-20 17:12:44.598904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:52.469 [2024-11-20 17:12:44.599059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:52.469 [2024-11-20 17:12:44.599066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:52.469 [2024-11-20 17:12:44.599071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:52.469 [2024-11-20 17:12:44.599077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:52.469 [2024-11-20 17:12:44.610796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:52.469 [2024-11-20 17:12:44.611258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.469 [2024-11-20 17:12:44.611288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:52.469 [2024-11-20 17:12:44.611296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:52.469 [2024-11-20 17:12:44.611466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:52.469 [2024-11-20 17:12:44.611622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:52.469 [2024-11-20 17:12:44.611628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:52.469 [2024-11-20 17:12:44.611634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:52.469 [2024-11-20 17:12:44.611640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:52.469 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.469 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:52.469 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.469 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.469 [2024-11-20 17:12:44.623434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.469 [2024-11-20 17:12:44.623489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:52.469 [2024-11-20 17:12:44.623957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.469 [2024-11-20 17:12:44.623987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:52.469 [2024-11-20 17:12:44.623995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:52.469 [2024-11-20 17:12:44.624171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:52.469 [2024-11-20 17:12:44.624327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:52.469 [2024-11-20 17:12:44.624334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:52.469 [2024-11-20 17:12:44.624341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:52.469 [2024-11-20 17:12:44.624347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
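Note the trap installed above before the transport is created: it guarantees the target is torn down even if the test dies mid-run. The idiom, reduced to a standalone sketch (cleanup here is a hypothetical stand-in for the traced process_shm/nvmftestfini pair):

# Same trap idiom as the "trap '... nvmftestfini' SIGINT SIGTERM EXIT" above:
# cleanup runs once whether the script exits normally or is interrupted.
cleanup() {
  echo "tearing down"          # stand-in for process_shm + nvmftestfini
  trap - SIGINT SIGTERM EXIT   # disarm so cleanup does not run twice
}
trap cleanup SIGINT SIGTERM EXIT
echo "test body runs here"     # hypothetical work; any exit path still cleans up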
00:29:52.469 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.469 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:52.469 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.469 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.469 [2024-11-20 17:12:44.636215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:52.469 [2024-11-20 17:12:44.636752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.469 [2024-11-20 17:12:44.636783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:52.469 [2024-11-20 17:12:44.636792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:52.469 [2024-11-20 17:12:44.636961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:52.469 [2024-11-20 17:12:44.637116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:52.469 [2024-11-20 17:12:44.637123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:52.469 [2024-11-20 17:12:44.637129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:52.469 [2024-11-20 17:12:44.637136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:52.730 [2024-11-20 17:12:44.648857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:52.730 [2024-11-20 17:12:44.649434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.730 [2024-11-20 17:12:44.649464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:52.730 [2024-11-20 17:12:44.649474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:52.730 [2024-11-20 17:12:44.649641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:52.730 [2024-11-20 17:12:44.649800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:52.730 [2024-11-20 17:12:44.649807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:52.730 [2024-11-20 17:12:44.649813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:52.730 [2024-11-20 17:12:44.649818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:52.730 Malloc0 00:29:52.730 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.730 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:52.730 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.730 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.730 [2024-11-20 17:12:44.661532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:52.730 [2024-11-20 17:12:44.661869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.730 [2024-11-20 17:12:44.661884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:52.730 [2024-11-20 17:12:44.661890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:52.730 [2024-11-20 17:12:44.662042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:52.730 [2024-11-20 17:12:44.662198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:52.730 [2024-11-20 17:12:44.662205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:52.730 [2024-11-20 17:12:44.662210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:52.730 [2024-11-20 17:12:44.662215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:52.730 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.730 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:52.730 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.730 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.730 [2024-11-20 17:12:44.674204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:52.730 [2024-11-20 17:12:44.674531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.730 [2024-11-20 17:12:44.674544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:52.730 [2024-11-20 17:12:44.674550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:52.730 [2024-11-20 17:12:44.674702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:52.730 [2024-11-20 17:12:44.674854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:52.730 [2024-11-20 17:12:44.674859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:52.730 [2024-11-20 17:12:44.674864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:29:52.730 [2024-11-20 17:12:44.674869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:52.730 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.730 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:52.730 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.730 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.730 [2024-11-20 17:12:44.686839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:52.730 [2024-11-20 17:12:44.687386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.730 [2024-11-20 17:12:44.687416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234e000 with addr=10.0.0.2, port=4420 00:29:52.730 [2024-11-20 17:12:44.687424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234e000 is same with the state(6) to be set 00:29:52.730 [2024-11-20 17:12:44.687594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234e000 (9): Bad file descriptor 00:29:52.730 [2024-11-20 17:12:44.687749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:52.730 [2024-11-20 17:12:44.687755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:52.730 [2024-11-20 17:12:44.687761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:52.730 [2024-11-20 17:12:44.687767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:52.730 [2024-11-20 17:12:44.688723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.730 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.730 17:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2146875 00:29:52.730 [2024-11-20 17:12:44.699676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:52.730 [2024-11-20 17:12:44.727732] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
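With the listener finally up ("NVMe/TCP Target Listening on 10.0.0.2 port 4420" and "Resetting controller successful" above), the bring-up that the interleaved rpc_cmd traces performed reduces to five RPCs. A condensed sketch driving scripts/rpc.py directly (assuming rpc_cmd's usual mapping to rpc.py and a target app already serving the default /var/tmp/spdk.sock):

# Same sequence as the rpc_cmd calls traced above, against scripts/rpc.py:
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # bdevperf.sh@17
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420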
00:29:54.243 4768.57 IOPS, 18.63 MiB/s
[2024-11-20T16:12:47.361Z] 5779.88 IOPS, 22.58 MiB/s
[2024-11-20T16:12:48.301Z] 6558.00 IOPS, 25.62 MiB/s
[2024-11-20T16:12:49.243Z] 7208.60 IOPS, 28.16 MiB/s
[2024-11-20T16:12:50.631Z] 7723.55 IOPS, 30.17 MiB/s
[2024-11-20T16:12:51.572Z] 8155.33 IOPS, 31.86 MiB/s
[2024-11-20T16:12:52.513Z] 8515.54 IOPS, 33.26 MiB/s
[2024-11-20T16:12:53.456Z] 8813.86 IOPS, 34.43 MiB/s
[2024-11-20T16:12:53.456Z] 9085.47 IOPS, 35.49 MiB/s
00:30:01.280 Latency(us)
[2024-11-20T16:12:53.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:01.280 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:01.280 Verification LBA range: start 0x0 length 0x4000
00:30:01.280 Nvme1n1 : 15.01 9087.52 35.50 13226.47 0.00 5717.29 556.37 23374.51
00:30:01.280 [2024-11-20T16:12:53.456Z] ===================================================================================================================
00:30:01.280 [2024-11-20T16:12:53.456Z] Total : 9087.52 35.50 13226.47 0.00 5717.29 556.37 23374.51
00:30:01.280 17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2148131 ']'
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2148131
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2148131 ']'
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2148131
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2148131
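A quick consistency check on the bdevperf table above: at the 4096-byte I/O size, the reported 9087.52 IOPS and 35.50 MiB/s agree (sketch, using 1 MiB = 1048576 bytes). The large Fail/s figure is consistent with the reset storm earlier: bdevperf keeps submitting while the test repeatedly disconnects the controller.

# Verify IOPS vs. throughput from the table above:
# 9087.52 IOPS x 4096 B per I/O = 35.50 MiB/s.
awk 'BEGIN { printf "%.2f MiB/s\n", 9087.52 * 4096 / 1048576 }'
# prints: 35.50 MiB/s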
00:30:01.540 17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:01.541 17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:01.541 17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2148131' 00:30:01.541 killing process with pid 2148131 00:30:01.541 17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2148131 00:30:01.541 17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2148131 00:30:01.541 17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:01.541 17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:01.541 17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:01.541 17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:30:01.541 17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:30:01.541 17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:01.541 17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:30:01.541 17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:01.541 17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:01.541 17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.541 17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.541 17:12:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:04.086 00:30:04.086 real 0m28.491s 00:30:04.086 user 1m4.077s 00:30:04.086 sys 0m7.743s 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:04.086 ************************************ 00:30:04.086 END TEST nvmf_bdevperf 00:30:04.086 ************************************ 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.086 ************************************ 00:30:04.086 START TEST nvmf_target_disconnect 00:30:04.086 ************************************ 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:04.086 * Looking for test storage... 
00:30:04.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:04.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.086 --rc genhtml_branch_coverage=1 00:30:04.086 --rc genhtml_function_coverage=1 00:30:04.086 --rc genhtml_legend=1 00:30:04.086 --rc geninfo_all_blocks=1 00:30:04.086 --rc geninfo_unexecuted_blocks=1 00:30:04.086 00:30:04.086 ' 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:04.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.086 --rc genhtml_branch_coverage=1 00:30:04.086 --rc genhtml_function_coverage=1 00:30:04.086 --rc genhtml_legend=1 00:30:04.086 --rc geninfo_all_blocks=1 00:30:04.086 --rc geninfo_unexecuted_blocks=1 00:30:04.086 00:30:04.086 ' 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:04.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.086 --rc genhtml_branch_coverage=1 00:30:04.086 --rc genhtml_function_coverage=1 00:30:04.086 --rc genhtml_legend=1 00:30:04.086 --rc geninfo_all_blocks=1 00:30:04.086 --rc geninfo_unexecuted_blocks=1 00:30:04.086 00:30:04.086 ' 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:04.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.086 --rc genhtml_branch_coverage=1 00:30:04.086 --rc genhtml_function_coverage=1 00:30:04.086 --rc genhtml_legend=1 00:30:04.086 --rc geninfo_all_blocks=1 00:30:04.086 --rc geninfo_unexecuted_blocks=1 00:30:04.086 00:30:04.086 ' 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:30:04.086 17:12:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.086 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:04.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:30:04.087 17:12:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:12.239 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:12.239 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:12.239 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.239 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:12.240 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
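Two details from the setup trace above are worth pulling out. First, the lcov probe at the top of this test walked scripts/common.sh's lt/cmp_versions helpers: split both version strings on '.', '-' and ':', then compare numerically field by field. Condensed into a standalone sketch (version_lt is a hypothetical name for the traced logic):

# Hypothetical condensation of the cmp_versions walk traced earlier:
# split each version on .-: and compare numerically, field by field.
version_lt() {
  local IFS=.-: i
  local -a a=($1) b=($2)
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
  done
  return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 < 2"   # matches the traced result

Second, the sourced nvmf/common.sh trips a real bash complaint above, "line 33: [: : integer expression expected": test's -eq needs integer operands, and the guarded variable expanded to an empty string ('[' '' -eq 1 ']'). A minimal reproduction (flag is a hypothetical stand-in for whichever variable was unset):

# Reproduce the "[: : integer expression expected" message logged above.
flag=""
[ "$flag" -eq 1 ]        # -> bash: [: : integer expression expected
[ "${flag:-0}" -eq 1 ]   # guard with a numeric default so the test stays valid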
00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:12.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:12.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:30:12.240 00:30:12.240 --- 10.0.0.2 ping statistics --- 00:30:12.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.240 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:12.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:12.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:30:12.240 00:30:12.240 --- 10.0.0.1 ping statistics --- 00:30:12.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.240 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:12.240 ************************************ 00:30:12.240 START TEST nvmf_target_disconnect_tc1 00:30:12.240 ************************************ 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:12.240 17:13:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:12.240 [2024-11-20 17:13:03.658410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.240 [2024-11-20 17:13:03.658513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fcad0 with addr=10.0.0.2, port=4420 00:30:12.240 [2024-11-20 17:13:03.658542] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:12.240 [2024-11-20 17:13:03.658557] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:12.240 [2024-11-20 17:13:03.658566] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:12.240 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:12.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:12.240 Initializing NVMe Controllers 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:12.240 00:30:12.240 real 0m0.144s 00:30:12.240 user 0m0.071s 00:30:12.240 sys 0m0.072s 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:12.240 ************************************ 00:30:12.240 END TEST nvmf_target_disconnect_tc1 00:30:12.240 ************************************ 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:12.240 ************************************ 00:30:12.240 START TEST nvmf_target_disconnect_tc2 00:30:12.240 ************************************ 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2154250 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2154250 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2154250 ']' 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:12.240 17:13:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:12.240 [2024-11-20 17:13:03.822979] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:30:12.240 [2024-11-20 17:13:03.823044] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.240 [2024-11-20 17:13:03.923765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:12.240 [2024-11-20 17:13:03.976076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.240 [2024-11-20 17:13:03.976130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
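
Test case tc1 above covered the negative path: with no target listening yet, spdk_nvme_probe() inside the reconnect example fails with errno 111 and the NOT wrapper counts that exit status 1 as a pass. For tc2, nvmfappstart launches a real target inside the namespace (note the ip netns exec prefix on the nvmf_tgt command) and waitforlisten blocks until the application answers on its RPC socket before any configuration is sent. A rough equivalent of that start-and-wait, assuming the default /var/tmp/spdk.sock RPC socket and an SPDK build tree (a sketch, not the autotest helper itself):

    # Start the target in the namespace; the Unix-domain RPC socket stays reachable
    # from the root namespace because it lives on the filesystem
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!

    # Poll the RPC socket for up to ~10 s before configuring the target
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done
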
00:30:12.240 [2024-11-20 17:13:03.976139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.240 [2024-11-20 17:13:03.976146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.240 [2024-11-20 17:13:03.976152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:12.240 [2024-11-20 17:13:03.978526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:12.240 [2024-11-20 17:13:03.978686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:12.240 [2024-11-20 17:13:03.978847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:12.240 [2024-11-20 17:13:03.978848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:12.502 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:12.502 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:12.502 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:12.502 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:12.502 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:12.765 Malloc0 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:12.765 [2024-11-20 17:13:04.729519] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:12.765 17:13:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:12.765 [2024-11-20 17:13:04.769882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2154297 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:12.765 17:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:14.689 17:13:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2154250 00:30:14.689 17:13:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error 
(sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Write completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Write completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Write completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Write completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Write completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Write completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Write completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Write completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 [2024-11-20 17:13:06.810013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Write completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Write completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed 
with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Write completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Write completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.689 Read completed with error (sct=0, sc=8) 00:30:14.689 starting I/O failed 00:30:14.690 Read completed with error (sct=0, sc=8) 00:30:14.690 starting I/O failed 00:30:14.690 Read completed with error (sct=0, sc=8) 00:30:14.690 starting I/O failed 00:30:14.690 Read completed with error (sct=0, sc=8) 00:30:14.690 starting I/O failed 00:30:14.690 Read completed with error (sct=0, sc=8) 00:30:14.690 starting I/O failed 00:30:14.690 Write completed with error (sct=0, sc=8) 00:30:14.690 starting I/O failed 00:30:14.690 Write completed with error (sct=0, sc=8) 00:30:14.690 starting I/O failed 00:30:14.690 Write completed with error (sct=0, sc=8) 00:30:14.690 starting I/O failed 00:30:14.690 Read completed with error (sct=0, sc=8) 00:30:14.690 starting I/O failed 00:30:14.690 Read completed with error (sct=0, sc=8) 00:30:14.690 starting I/O failed 00:30:14.690 Write completed with error (sct=0, sc=8) 00:30:14.690 starting I/O failed 00:30:14.690 Write completed with error (sct=0, sc=8) 00:30:14.690 starting I/O failed 00:30:14.690 Read completed with error (sct=0, sc=8) 00:30:14.690 starting I/O failed 00:30:14.690 Read completed with error (sct=0, sc=8) 00:30:14.690 starting I/O failed 00:30:14.690 [2024-11-20 17:13:06.810403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.690 [2024-11-20 17:13:06.810721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.810750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.811071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.811084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.811461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.811524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.811839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.811854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.812243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.812285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 
00:30:14.690 [2024-11-20 17:13:06.812667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.812679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.812905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.812917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.813151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.813173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.813587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.813599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.813911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.813922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.814118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.814129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.814407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.814419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.814750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.814762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.815097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.815109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.815434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.815446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 
00:30:14.690 [2024-11-20 17:13:06.815785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.815798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.816189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.816206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.816529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.816547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.816871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.816882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.817213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.817225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.817685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.817697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.817905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.817918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.818178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.818190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.818477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.818489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.818817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.818828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 
00:30:14.690 [2024-11-20 17:13:06.819182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.819194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.819499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.819511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.819864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.819877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.820238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.820251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.820571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.820583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.820939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.820951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.821167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.690 [2024-11-20 17:13:06.821179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.690 qpair failed and we were unable to recover it. 00:30:14.690 [2024-11-20 17:13:06.821505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.821516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.821737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.821748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.822096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.822109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 
00:30:14.691 [2024-11-20 17:13:06.822420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.822433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.822626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.822639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.822985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.822997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.823329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.823340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.823596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.823608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.823952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.823963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.824294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.824307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.824648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.824661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.825013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.825025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.825369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.825381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 
00:30:14.691 [2024-11-20 17:13:06.825575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.825586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.825787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.825799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.825883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.825894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.826168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.826180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.826512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.826523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.826718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.826730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.827038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.827049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.827456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.827469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.827809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.827820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.828040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.828051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 
00:30:14.691 [2024-11-20 17:13:06.828407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.828420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.828773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.828787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.829143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.829161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.829459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.829471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.829767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.829778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.830099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.830110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.830439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.830451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.830768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.830781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.831126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.831137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.831454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.831466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 
00:30:14.691 [2024-11-20 17:13:06.831789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.831800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.832025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.832037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.832358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.832371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.832674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.832685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.691 [2024-11-20 17:13:06.832924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.691 [2024-11-20 17:13:06.832935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.691 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.833264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.833274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.833578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.833590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.833897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.833907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.834306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.834317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.834627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.834637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 
00:30:14.692 [2024-11-20 17:13:06.834949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.834959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.835152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.835170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.835511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.835522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.835755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.835765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.836090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.836100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.836323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.836335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.836652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.836662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.836974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.836984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.837312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.837323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.837550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.837560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 
00:30:14.692 [2024-11-20 17:13:06.837878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.837890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.838210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.838221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.838552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.838565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.838879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.838893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.839134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.839146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.839463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.839476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.839784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.839798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.840132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.840145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.840551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.840565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.840789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.840802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 
00:30:14.692 [2024-11-20 17:13:06.841139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.841152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.841561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.841574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.841903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.841921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.842236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.842250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.842588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.842603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.842963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.842976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.843287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.843301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.843622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.843634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.844026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.844039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-11-20 17:13:06.844383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-11-20 17:13:06.844397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 
00:30:14.974 [2024-11-20 17:13:06.915558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.915586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.915933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.915962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.916321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.916353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.916724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.916753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.917127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.917177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.917542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.917572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.918005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.918033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.918404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.918435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.918788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.918817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.919207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.919237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 
00:30:14.974 [2024-11-20 17:13:06.919613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.919642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.919893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.919923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.920299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.920329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.920699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.920728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.921092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.921122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.921504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.921537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.921891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.921921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.922282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.922312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.922664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.922693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.923053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.923081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 
00:30:14.974 [2024-11-20 17:13:06.923453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.923486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.923833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.923864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.924228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.924259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.924506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.924537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.924794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.924823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.925173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.925218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.925482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-11-20 17:13:06.925511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-11-20 17:13:06.925850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.925880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.926234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.926267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.926665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.926701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 
00:30:14.975 [2024-11-20 17:13:06.927042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.927071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.927438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.927468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.927887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.927918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.928260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.928291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.928657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.928686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.929047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.929075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.929447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.929476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.929866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.929896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.930122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.930150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.930534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.930564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 
00:30:14.975 [2024-11-20 17:13:06.930922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.930950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.931314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.931344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.931598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.931627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.931968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.931998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.932342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.932373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.932632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.932659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.933021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.933050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.933394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.933425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.933784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.933813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.934194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.934237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 
00:30:14.975 [2024-11-20 17:13:06.934574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.934605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.934949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.934978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.935233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.935263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.935634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.935663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.936029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.936057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.936426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.936457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.936827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.936858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.937100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.937129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.937513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.937544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.937904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.937934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 
00:30:14.975 [2024-11-20 17:13:06.938197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.938245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.938604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.938636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.938971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.939002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.939345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.939377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.939741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-11-20 17:13:06.939771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-11-20 17:13:06.940133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.940185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.940449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.940478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.940888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.940916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.941283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.941314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.941568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.941607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 
00:30:14.976 [2024-11-20 17:13:06.941970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.941999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.942370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.942401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.942770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.942798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.943150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.943203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.943566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.943595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.943839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.943868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.944114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.944144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.944537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.944568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.944924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.944954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.945274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.945305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 
00:30:14.976 [2024-11-20 17:13:06.945671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.945700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.946055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.946085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.946517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.946548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.946898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.946927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.947303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.947337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.947705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.947735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.947984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.948013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.948384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.948417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.948756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.948787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.949156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.949205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 
00:30:14.976 [2024-11-20 17:13:06.949554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.949583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.949952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.949982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.950300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.950331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.950686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.950716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.951071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.951100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.951470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.951510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.951884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.951914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.952285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.952317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.952673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.952702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.953070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.953100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 
00:30:14.976 [2024-11-20 17:13:06.953464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.953494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.953841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.953871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.954222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.976 [2024-11-20 17:13:06.954253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.976 qpair failed and we were unable to recover it. 00:30:14.976 [2024-11-20 17:13:06.954524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.954552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.954897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.954928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.955241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.955273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.955625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.955655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.955993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.956023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.956388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.956418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.956775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.956811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 
00:30:14.977 [2024-11-20 17:13:06.957178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.957211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.957558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.957588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.957828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.957856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.958241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.958273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.958617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.958647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.959006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.959036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.959414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.959448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.959821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.959851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.960222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.960255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.960587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.960616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 
00:30:14.977 [2024-11-20 17:13:06.960973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.961002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.961358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.961389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.961750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.961779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.962127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.962156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.962408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.962442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.962796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.962825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.963198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.963243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.963485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.963514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.963878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.963910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.964269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.964302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 
00:30:14.977 [2024-11-20 17:13:06.964659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.964689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.965051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.965080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.965448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.965479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.965835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.965865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.966228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.966260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.966496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.966534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.966888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.966917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.967347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.967377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.967732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.967762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 00:30:14.977 [2024-11-20 17:13:06.968128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.977 [2024-11-20 17:13:06.968172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:14.977 qpair failed and we were unable to recover it. 
00:30:14.977 [2024-11-20 17:13:06.968538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.977 [2024-11-20 17:13:06.968569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420
00:30:14.977 qpair failed and we were unable to recover it.
00:30:14.978 [... the three-line error triplet above repeats back-to-back for roughly 210 consecutive reconnect attempts, from 17:13:06.968538 through 17:13:07.048808 (log prefixes 00:30:14.977-00:30:14.983), every attempt failing with errno = 111 against addr=10.0.0.2, port=4420; the reported tqpair is 0x7f3690000b90 for the first ~36 attempts and 0xc290c0 from 17:13:06.982079 onward, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:14.983 [2024-11-20 17:13:07.049248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-11-20 17:13:07.049278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-11-20 17:13:07.049621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-11-20 17:13:07.049651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-11-20 17:13:07.050021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-11-20 17:13:07.050050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-11-20 17:13:07.050433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-11-20 17:13:07.050463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-11-20 17:13:07.050804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-11-20 17:13:07.050832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-11-20 17:13:07.051183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-11-20 17:13:07.051213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-11-20 17:13:07.051568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-11-20 17:13:07.051598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-11-20 17:13:07.051975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-11-20 17:13:07.052012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-11-20 17:13:07.052253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-11-20 17:13:07.052286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-11-20 17:13:07.052697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-11-20 17:13:07.052726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 
00:30:14.983 [2024-11-20 17:13:07.053080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-11-20 17:13:07.053109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-11-20 17:13:07.053473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-11-20 17:13:07.053503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-11-20 17:13:07.053887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-11-20 17:13:07.053918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-11-20 17:13:07.054262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-11-20 17:13:07.054293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.054669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.054698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.055057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.055088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.055433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.055463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.055821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.055851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.056222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.056252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.056612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.056642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 
00:30:14.984 [2024-11-20 17:13:07.057003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.057031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.057386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.057416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.057768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.057796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.058182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.058214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.058470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.058502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.058860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.058890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.059268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.059299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.059651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.059681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.060043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.060072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.060436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.060466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 
00:30:14.984 [2024-11-20 17:13:07.060890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.060922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.061271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.061303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.061651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.061681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.062018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.062047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.062407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.062437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.062807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.062837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.063101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.063130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.063541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.063573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.063803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.063837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.064198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.064228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 
00:30:14.984 [2024-11-20 17:13:07.064603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.064641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.065008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.065040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.065406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.065437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.065788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.065818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.066234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.066265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.066629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.066659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.067010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.067040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.067391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.067422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.067789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.067819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-11-20 17:13:07.068064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-11-20 17:13:07.068092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 
00:30:14.985 [2024-11-20 17:13:07.068478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.068510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.068867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.068896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.069323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.069356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.069713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.069742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.070105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.070137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.070497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.070533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.070870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.070899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.071268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.071306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.071714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.071744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.072000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.072032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 
00:30:14.985 [2024-11-20 17:13:07.072383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.072414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.072827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.072858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.073200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.073232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.073631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.073660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.074022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.074052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.074388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.074420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.074766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.074797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.075172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.075204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.075554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.075583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.075964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.075993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 
00:30:14.985 [2024-11-20 17:13:07.076379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.076412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.076765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.076795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.077153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.077193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.077473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.077503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.077908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.077937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.078276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.078308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.078684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.078714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.079075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.079104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.079547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.079579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.079945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.079976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 
00:30:14.985 [2024-11-20 17:13:07.080344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.080374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.080725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.080756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.081110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.081141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.081487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.081517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.081881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.081911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.082289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.082319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.082670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.082700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.083031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-11-20 17:13:07.083063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-11-20 17:13:07.083449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.083478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.083834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.083862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 
00:30:14.986 [2024-11-20 17:13:07.084206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.084237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.084622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.084650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.085018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.085048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.085302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.085352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.085645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.085676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.086024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.086054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.086397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.086428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.086804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.086832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.087191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.087230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.087624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.087654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 
00:30:14.986 [2024-11-20 17:13:07.088014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.088043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.088422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.088454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.088821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.088850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.089209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.089241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.089610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.089640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.090027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.090056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.090402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.090434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.090808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.090838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.091212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.091243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.091583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.091613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 
00:30:14.986 [2024-11-20 17:13:07.091958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.091990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.092349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.092380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.092728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.092759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.093141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.093181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.093543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.093572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.093918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.093949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.094322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.094355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.094738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.094769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.095120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.095152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.095499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.095529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 
00:30:14.986 [2024-11-20 17:13:07.095902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.095930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.096296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.096329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.096737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.096769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.097153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.097205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.097454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.097489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.097864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.097904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-11-20 17:13:07.098236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-11-20 17:13:07.098270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.098608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.098640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.099006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.099037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.099424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.099455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 
00:30:14.987 [2024-11-20 17:13:07.099878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.099909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.100278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.100311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.100655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.100685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.101103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.101133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.101582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.101615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.101950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.101980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.102340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.102371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.102737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.102766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.103013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.103044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.103421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.103454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 
00:30:14.987 [2024-11-20 17:13:07.103704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.103735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.104090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.104120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.104537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.104569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.104932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.104962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.105340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.105370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.105806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.105839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.106185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.106214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.106546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.106575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.106935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.106964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.107329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.107358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 
00:30:14.987 [2024-11-20 17:13:07.107708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.107740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.108099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.108130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.108486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.108519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.108877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.108908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.109282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.109316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.109695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.109724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.110084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.110115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.110497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.110531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.110864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.110894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.111251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.111284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 
00:30:14.987 [2024-11-20 17:13:07.111638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.111668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.112000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.112031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.112390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.112424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.112771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.112801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.113059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-11-20 17:13:07.113087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-11-20 17:13:07.113474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.113506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.113859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.113903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.114298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.114330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.114681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.114711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.115064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.115096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 
00:30:14.988 [2024-11-20 17:13:07.115453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.115483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.115819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.115849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.116205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.116236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.116634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.116663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.116876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.116904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.117290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.117322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.117679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.117709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.118055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.118085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.118462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.118495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.118864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.118895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 
00:30:14.988 [2024-11-20 17:13:07.119271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.119303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.119550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.119585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.119934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.119967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.120332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.120364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.120733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.120763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.121188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.121219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.121582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.121612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.121967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.121997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.122441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.122472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.122716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.122748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 
00:30:14.988 [2024-11-20 17:13:07.123102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.123131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.123451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.123483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.123843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.123872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.124231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.124271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.124634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.124665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-11-20 17:13:07.125023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-11-20 17:13:07.125054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.989 [2024-11-20 17:13:07.125384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.989 [2024-11-20 17:13:07.125417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.989 qpair failed and we were unable to recover it. 00:30:14.989 [2024-11-20 17:13:07.125784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.989 [2024-11-20 17:13:07.125813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.989 qpair failed and we were unable to recover it. 00:30:14.989 [2024-11-20 17:13:07.126062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.989 [2024-11-20 17:13:07.126095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.989 qpair failed and we were unable to recover it. 00:30:14.989 [2024-11-20 17:13:07.126481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.989 [2024-11-20 17:13:07.126517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.989 qpair failed and we were unable to recover it. 
00:30:14.989 [2024-11-20 17:13:07.126796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.989 [2024-11-20 17:13:07.126824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.989 qpair failed and we were unable to recover it. 00:30:14.989 [2024-11-20 17:13:07.127178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.989 [2024-11-20 17:13:07.127210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.989 qpair failed and we were unable to recover it. 00:30:14.989 [2024-11-20 17:13:07.127583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.989 [2024-11-20 17:13:07.127615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.989 qpair failed and we were unable to recover it. 00:30:14.989 [2024-11-20 17:13:07.127991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.989 [2024-11-20 17:13:07.128021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.989 qpair failed and we were unable to recover it. 00:30:14.989 [2024-11-20 17:13:07.128269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.989 [2024-11-20 17:13:07.128300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.989 qpair failed and we were unable to recover it. 00:30:14.989 [2024-11-20 17:13:07.128657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.989 [2024-11-20 17:13:07.128687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:14.989 qpair failed and we were unable to recover it. 00:30:15.262 [2024-11-20 17:13:07.129036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.262 [2024-11-20 17:13:07.129070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.262 qpair failed and we were unable to recover it. 00:30:15.262 [2024-11-20 17:13:07.129476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.262 [2024-11-20 17:13:07.129509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.262 qpair failed and we were unable to recover it. 00:30:15.262 [2024-11-20 17:13:07.129867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.262 [2024-11-20 17:13:07.129898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.262 qpair failed and we were unable to recover it. 00:30:15.262 [2024-11-20 17:13:07.130259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.262 [2024-11-20 17:13:07.130290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.262 qpair failed and we were unable to recover it. 
00:30:15.262 [2024-11-20 17:13:07.130657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.262 [2024-11-20 17:13:07.130686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.262 qpair failed and we were unable to recover it. 00:30:15.262 [2024-11-20 17:13:07.131051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.262 [2024-11-20 17:13:07.131094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.262 qpair failed and we were unable to recover it. 00:30:15.262 [2024-11-20 17:13:07.131578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.131636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.131955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.132010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.132457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.132517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.132840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.132897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.133269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.133316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.133695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.133734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.134085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.134115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.134506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.134537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 
00:30:15.263 [2024-11-20 17:13:07.134895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.134925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.135332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.135363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.135760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.135789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.136169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.136202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.136535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.136566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.136968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.136996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.137250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.137281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.137673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.137702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.138062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.138090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.138438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.138470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 
00:30:15.263 [2024-11-20 17:13:07.138834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.138864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.139203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.139233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.139594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.139623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.139982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.140011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.140387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.140423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.140784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.140814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.141187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.141218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.141570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.141599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.141953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.141982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.142247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.142277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 
00:30:15.263 [2024-11-20 17:13:07.142612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.142641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.143012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.143042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.143330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.143359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.143739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.143768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.144128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.144170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.144532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.144560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.144809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.144838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.145202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.145247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.145651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.145683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.263 [2024-11-20 17:13:07.146056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.146086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 
00:30:15.263 [2024-11-20 17:13:07.146427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.263 [2024-11-20 17:13:07.146458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.263 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.146824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.146851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.147112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.147140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.147549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.147579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.147930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.147958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.148305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.148344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.148720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.148749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.149097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.149124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.149493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.149523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.149898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.149927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 
00:30:15.264 [2024-11-20 17:13:07.150376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.150405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.150767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.150803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.151154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.151192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.151521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.151551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.151907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.151935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.152377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.152407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.152757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.152788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.153078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.153107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.153500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.153532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.153944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.153974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 
00:30:15.264 [2024-11-20 17:13:07.154328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.154357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.154710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.154738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.155098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.155127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.155465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.155495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.155848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.155876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.156277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.156308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.156665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.156693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.156970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.157000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.157364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.157397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.157752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.157780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 
00:30:15.264 [2024-11-20 17:13:07.158144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.158191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.158560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.158589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.158946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.158976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.159350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.159380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.159741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.159768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.160140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.160182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.160586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.160615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.160977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.161006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.161344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.264 [2024-11-20 17:13:07.161374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.264 qpair failed and we were unable to recover it. 00:30:15.264 [2024-11-20 17:13:07.161745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.265 [2024-11-20 17:13:07.161775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.265 qpair failed and we were unable to recover it. 
00:30:15.265 [2024-11-20 17:13:07.162136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.265 [2024-11-20 17:13:07.162176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.265 qpair failed and we were unable to recover it. 00:30:15.265 [2024-11-20 17:13:07.162508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.265 [2024-11-20 17:13:07.162536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.265 qpair failed and we were unable to recover it. 00:30:15.265 [2024-11-20 17:13:07.162904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.265 [2024-11-20 17:13:07.162932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.265 qpair failed and we were unable to recover it. 00:30:15.265 [2024-11-20 17:13:07.163272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.265 [2024-11-20 17:13:07.163301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.265 qpair failed and we were unable to recover it. 00:30:15.265 [2024-11-20 17:13:07.163639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.265 [2024-11-20 17:13:07.163669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.265 qpair failed and we were unable to recover it. 00:30:15.265 [2024-11-20 17:13:07.164055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.265 [2024-11-20 17:13:07.164083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.265 qpair failed and we were unable to recover it. 00:30:15.265 [2024-11-20 17:13:07.164432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.265 [2024-11-20 17:13:07.164462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.265 qpair failed and we were unable to recover it. 00:30:15.265 [2024-11-20 17:13:07.164827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.265 [2024-11-20 17:13:07.164856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.265 qpair failed and we were unable to recover it. 00:30:15.265 [2024-11-20 17:13:07.165221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.265 [2024-11-20 17:13:07.165251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.265 qpair failed and we were unable to recover it. 00:30:15.265 [2024-11-20 17:13:07.165622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.265 [2024-11-20 17:13:07.165653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.265 qpair failed and we were unable to recover it. 
00:30:15.265 [2024-11-20 17:13:07.166003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.265 [2024-11-20 17:13:07.166033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:15.265 qpair failed and we were unable to recover it.
[... the same three messages repeat for roughly 200 further connect attempts, timestamps 17:13:07.166 through 17:13:07.245, every one with errno = 111 against tqpair=0xc290c0, addr=10.0.0.2, port=4420 ...]
00:30:15.270 [2024-11-20 17:13:07.245721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-11-20 17:13:07.245749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-11-20 17:13:07.246092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-11-20 17:13:07.246121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.270 qpair failed and we were unable to recover it. 00:30:15.270 [2024-11-20 17:13:07.246519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-11-20 17:13:07.246549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.270 qpair failed and we were unable to recover it. 00:30:15.270 [2024-11-20 17:13:07.246983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-11-20 17:13:07.247012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.270 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.247260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.247292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.247660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.247689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.248046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.248074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.248428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.248458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.248791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.248819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.249184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.249215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.249579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.249608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 
00:30:15.271 [2024-11-20 17:13:07.249858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.249887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.250231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.250260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.250519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.250547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.250953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.250982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.251324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.251354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.251732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.251761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.252135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.252177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.252553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.252581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.252902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.252932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.253307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.253337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 
00:30:15.271 [2024-11-20 17:13:07.253703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.253732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.254088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.254118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.254479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.254509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.254876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.254904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.255256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.255288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.255657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.255686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.256049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.256078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.256430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.256461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.256804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.256832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.257191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.257222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 
00:30:15.271 [2024-11-20 17:13:07.257583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.257618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.257981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.258011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.258334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.258365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.258743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.258771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.259139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.259179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.259535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.259566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.259924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.259953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.260312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.260348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-11-20 17:13:07.260702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-11-20 17:13:07.260731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.261100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.261129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 
00:30:15.272 [2024-11-20 17:13:07.261510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.261542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.261898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.261926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.262204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.262234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.262610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.262641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.263003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.263031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.263451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.263481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.263837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.263866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.264241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.264271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.264642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.264671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.265039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.265068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 
00:30:15.272 [2024-11-20 17:13:07.265423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.265453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.265816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.265845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.266205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.266235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.266479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.266507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.266854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.266886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.267335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.267364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.267723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.267752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.268105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.268133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.268511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.268540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.268892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.268920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 
00:30:15.272 [2024-11-20 17:13:07.269257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.269288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.269658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.269688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.270017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.270046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.270392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.270423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.270763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.270797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.271150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.271191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.271508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.271547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.271884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.271913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.272278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.272308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.272569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.272598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 
00:30:15.272 [2024-11-20 17:13:07.272990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.273018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.273427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.273457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-11-20 17:13:07.273807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-11-20 17:13:07.273838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.274204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.274234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.274637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.274665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.274917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.274945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.275344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.275374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.275713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.275742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.276100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.276131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.276520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.276550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 
00:30:15.273 [2024-11-20 17:13:07.276891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.276921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.277332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.277362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.277727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.277755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.278136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.278177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.278543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.278573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.278928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.278958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.279319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.279349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.279585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.279614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.279976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.280005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.280323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.280353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 
00:30:15.273 [2024-11-20 17:13:07.280727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.280756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.281097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.281126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.281503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.281534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.281887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.281916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.282290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.282320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.282680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.282708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.283079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.283108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.283458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.283489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.283904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.283933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.284301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.284332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 
00:30:15.273 [2024-11-20 17:13:07.284698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.284728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.285088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.285118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.285566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.285597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.285935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.285965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.286327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.286357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.286724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.286758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.286994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.287022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.287402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.287431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.287788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.287819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-11-20 17:13:07.288190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-11-20 17:13:07.288220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 
00:30:15.273 [2024-11-20 17:13:07.288623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.288652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 00:30:15.274 [2024-11-20 17:13:07.289009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.289037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 00:30:15.274 [2024-11-20 17:13:07.289421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.289451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 00:30:15.274 [2024-11-20 17:13:07.289823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.289852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 00:30:15.274 [2024-11-20 17:13:07.290218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.290252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 00:30:15.274 [2024-11-20 17:13:07.290619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.290647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 00:30:15.274 [2024-11-20 17:13:07.290897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.290925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 00:30:15.274 [2024-11-20 17:13:07.291266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.291297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 00:30:15.274 [2024-11-20 17:13:07.291665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.291693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 00:30:15.274 [2024-11-20 17:13:07.292059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.292090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 
00:30:15.274 [2024-11-20 17:13:07.292443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.292475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 00:30:15.274 [2024-11-20 17:13:07.292833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.292862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 00:30:15.274 [2024-11-20 17:13:07.293233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.293262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 00:30:15.274 [2024-11-20 17:13:07.293623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.293652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 00:30:15.274 [2024-11-20 17:13:07.294020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.294050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 00:30:15.274 [2024-11-20 17:13:07.294290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.294323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 00:30:15.274 [2024-11-20 17:13:07.294677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.294705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 00:30:15.274 [2024-11-20 17:13:07.295070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.295099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 00:30:15.274 [2024-11-20 17:13:07.295463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.295493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 00:30:15.274 [2024-11-20 17:13:07.295843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.295870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it. 
00:30:15.274 [2024-11-20 17:13:07.296269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.274 [2024-11-20 17:13:07.296299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.274 qpair failed and we were unable to recover it.
00:30:15.274 [... the same three-message sequence -- posix_sock_create connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. -- repeats through [2024-11-20 17:13:07.376349] ...]
00:30:15.280 [2024-11-20 17:13:07.376707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.376735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.377088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.377117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.377513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.377543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.377914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.377942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.378298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.378327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.378702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.378730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.379093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.379121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.379384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.379415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.379780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.379810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.380195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.380225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 
00:30:15.280 [2024-11-20 17:13:07.380569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.380605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.380865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.380897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.381155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.381194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.381561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.381590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.381963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.381991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.382336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.382365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.382707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.382735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.383095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.383123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.383495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.383526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.383888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.383917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 
00:30:15.280 [2024-11-20 17:13:07.384283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.384313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.384687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.384716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.385072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.385100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.385443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.385473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.385832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.385861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.386227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.386257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.386635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.386663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.386933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.386961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.387321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.387351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.387729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.387759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 
00:30:15.280 [2024-11-20 17:13:07.388118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.388147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.280 qpair failed and we were unable to recover it. 00:30:15.280 [2024-11-20 17:13:07.388495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.280 [2024-11-20 17:13:07.388524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.388881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.388910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.389177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.389208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.389628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.389656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.390029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.390058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.390425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.390455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.390823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.390864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.391284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.391314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.391648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.391676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 
00:30:15.281 [2024-11-20 17:13:07.392043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.392071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.392420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.392449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.392791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.392820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.393188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.393217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.393599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.393627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.393979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.394007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.394384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.394415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.394776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.394804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.395143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.395191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.395541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.395570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 
00:30:15.281 [2024-11-20 17:13:07.395966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.395996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.396361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.396392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.396763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.396792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.397151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.397190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.397518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.397546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.397904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.397933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.398293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.398323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.398584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.398612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.398962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.398991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.399373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.399403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 
00:30:15.281 [2024-11-20 17:13:07.399756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.399785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.400141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.400179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.400515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.400545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.400914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.400942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.281 qpair failed and we were unable to recover it. 00:30:15.281 [2024-11-20 17:13:07.401300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.281 [2024-11-20 17:13:07.401330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.401687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.401716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.402086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.402114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.402590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.402621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.402968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.402998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.403359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.403390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 
00:30:15.282 [2024-11-20 17:13:07.403745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.403773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.404141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.404178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.404525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.404554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.404802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.404829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.405182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.405212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.405575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.405604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.405953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.405981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.406352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.406382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.406743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.406777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.407126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.407156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 
00:30:15.282 [2024-11-20 17:13:07.407608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.407637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.407967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.407996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.408349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.408378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.408737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.408767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.409109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.409138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.409482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.409512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.409869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.409897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.410116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.410143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.410557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.410586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.410962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.410991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 
00:30:15.282 [2024-11-20 17:13:07.411363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.411393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.411663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.411691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.412073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.412102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.412470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.412500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.412858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.412889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.413246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.413277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.413636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.413665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.414018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.414047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.414435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.414464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.414821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.414850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 
00:30:15.282 [2024-11-20 17:13:07.415215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.415245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.415614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.415642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.416070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.282 [2024-11-20 17:13:07.416099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.282 qpair failed and we were unable to recover it. 00:30:15.282 [2024-11-20 17:13:07.416442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.283 [2024-11-20 17:13:07.416473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.283 qpair failed and we were unable to recover it. 00:30:15.283 [2024-11-20 17:13:07.416840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.283 [2024-11-20 17:13:07.416868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.283 qpair failed and we were unable to recover it. 00:30:15.283 [2024-11-20 17:13:07.417240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.283 [2024-11-20 17:13:07.417276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.283 qpair failed and we were unable to recover it. 00:30:15.283 [2024-11-20 17:13:07.417643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.283 [2024-11-20 17:13:07.417674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.283 qpair failed and we were unable to recover it. 00:30:15.283 [2024-11-20 17:13:07.418035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.283 [2024-11-20 17:13:07.418063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.283 qpair failed and we were unable to recover it. 00:30:15.283 [2024-11-20 17:13:07.418415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.283 [2024-11-20 17:13:07.418445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.283 qpair failed and we were unable to recover it. 00:30:15.283 [2024-11-20 17:13:07.418798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.283 [2024-11-20 17:13:07.418826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.283 qpair failed and we were unable to recover it. 
00:30:15.283 [2024-11-20 17:13:07.419183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.283 [2024-11-20 17:13:07.419212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.283 qpair failed and we were unable to recover it. 00:30:15.283 [2024-11-20 17:13:07.419569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.283 [2024-11-20 17:13:07.419597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.283 qpair failed and we were unable to recover it. 00:30:15.283 [2024-11-20 17:13:07.419898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.283 [2024-11-20 17:13:07.419927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.283 qpair failed and we were unable to recover it. 00:30:15.283 [2024-11-20 17:13:07.420294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.283 [2024-11-20 17:13:07.420324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.283 qpair failed and we were unable to recover it. 00:30:15.283 [2024-11-20 17:13:07.420699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.283 [2024-11-20 17:13:07.420728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.283 qpair failed and we were unable to recover it. 00:30:15.283 [2024-11-20 17:13:07.421096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.283 [2024-11-20 17:13:07.421125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.283 qpair failed and we were unable to recover it. 00:30:15.283 [2024-11-20 17:13:07.421532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.283 [2024-11-20 17:13:07.421562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.283 qpair failed and we were unable to recover it. 00:30:15.283 [2024-11-20 17:13:07.421899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.283 [2024-11-20 17:13:07.421929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.283 qpair failed and we were unable to recover it. 00:30:15.283 [2024-11-20 17:13:07.422178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.283 [2024-11-20 17:13:07.422213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.283 qpair failed and we were unable to recover it. 00:30:15.283 [2024-11-20 17:13:07.422576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.283 [2024-11-20 17:13:07.422604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.283 qpair failed and we were unable to recover it. 
00:30:15.283 [2024-11-20 17:13:07.422976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.283 [2024-11-20 17:13:07.423005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.283 qpair failed and we were unable to recover it. 00:30:15.283 [2024-11-20 17:13:07.423366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.283 [2024-11-20 17:13:07.423398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.283 qpair failed and we were unable to recover it. 00:30:15.556 [2024-11-20 17:13:07.423762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.556 [2024-11-20 17:13:07.423793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.556 qpair failed and we were unable to recover it. 00:30:15.556 [2024-11-20 17:13:07.424174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.556 [2024-11-20 17:13:07.424208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.556 qpair failed and we were unable to recover it. 00:30:15.556 [2024-11-20 17:13:07.424554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.556 [2024-11-20 17:13:07.424584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.556 qpair failed and we were unable to recover it. 00:30:15.556 [2024-11-20 17:13:07.424931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.556 [2024-11-20 17:13:07.424960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.556 qpair failed and we were unable to recover it. 00:30:15.556 [2024-11-20 17:13:07.425316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.556 [2024-11-20 17:13:07.425346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.556 qpair failed and we were unable to recover it. 00:30:15.556 [2024-11-20 17:13:07.425713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.556 [2024-11-20 17:13:07.425741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.556 qpair failed and we were unable to recover it. 00:30:15.556 [2024-11-20 17:13:07.426096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.556 [2024-11-20 17:13:07.426125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.556 qpair failed and we were unable to recover it. 00:30:15.556 [2024-11-20 17:13:07.426483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.556 [2024-11-20 17:13:07.426513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.556 qpair failed and we were unable to recover it. 
00:30:15.556 [2024-11-20 17:13:07.426884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.556 [2024-11-20 17:13:07.426914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.556 qpair failed and we were unable to recover it. 00:30:15.556 [2024-11-20 17:13:07.427273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.556 [2024-11-20 17:13:07.427304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.556 qpair failed and we were unable to recover it. 00:30:15.556 [2024-11-20 17:13:07.427679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.556 [2024-11-20 17:13:07.427707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.556 qpair failed and we were unable to recover it. 00:30:15.556 [2024-11-20 17:13:07.428066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.556 [2024-11-20 17:13:07.428094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.556 qpair failed and we were unable to recover it. 00:30:15.556 [2024-11-20 17:13:07.428454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.556 [2024-11-20 17:13:07.428484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.556 qpair failed and we were unable to recover it. 00:30:15.556 [2024-11-20 17:13:07.428749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.556 [2024-11-20 17:13:07.428779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.556 qpair failed and we were unable to recover it. 00:30:15.556 [2024-11-20 17:13:07.429126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.556 [2024-11-20 17:13:07.429155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.556 qpair failed and we were unable to recover it. 00:30:15.556 [2024-11-20 17:13:07.429494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.556 [2024-11-20 17:13:07.429522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.556 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.429890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.429919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.430277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.430307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 
00:30:15.557 [2024-11-20 17:13:07.430669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.430697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.431066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.431094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.431477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.431507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.431872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.431900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.432265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.432294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.432648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.432677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.432974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.433008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.433369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.433399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.433762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.433790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.434150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.434187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 
00:30:15.557 [2024-11-20 17:13:07.434480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.434509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.434870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.434900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.435262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.435293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.435649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.435678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.436038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.436066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.436417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.436447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.436812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.436841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.437196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.437233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.437569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.437598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.437955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.437984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 
00:30:15.557 [2024-11-20 17:13:07.438351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.438381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.438742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.438771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.439122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.439151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.439487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.439517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.439872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.439900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.440361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.440391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.440749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.440777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.441133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.441169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.441510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.441538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.441912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.441940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 
00:30:15.557 [2024-11-20 17:13:07.442276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.442306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.442675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.442702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.443063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.443091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.443500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.443530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.443861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.443890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.444249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.444279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.444631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.444659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.445023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.445051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.445403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.445433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.445793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.445822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 
00:30:15.557 [2024-11-20 17:13:07.446202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.446232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.446610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.446647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.447007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.447035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.447387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.447417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.447791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.447820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.448154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.448208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.448582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.448610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.448909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.448938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.449292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.449322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.449687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.449716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 
00:30:15.557 [2024-11-20 17:13:07.450068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.450097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.450466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.450496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.450854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.450882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.451230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.451261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.451641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.451669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.452046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.452074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.452408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.452440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.452782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.452810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.557 [2024-11-20 17:13:07.453177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.557 [2024-11-20 17:13:07.453207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.557 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.453561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.453588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 
00:30:15.558 [2024-11-20 17:13:07.453953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.453981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.454327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.454358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.454731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.454768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.455130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.455173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.455537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.455567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.455885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.455915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.456280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.456310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.456678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.456706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.457056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.457085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.457455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.457484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 
00:30:15.558 [2024-11-20 17:13:07.457842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.457870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.458236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.458266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.458610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.458638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.459000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.459027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.459401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.459438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.459792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.459821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.460186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.460215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.460481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.460509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.460949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.460977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.461317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.461346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 
00:30:15.558 [2024-11-20 17:13:07.461699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.461730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.462094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.462123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.462535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.462565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.462925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.462953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.463309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.463339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.463697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.463727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.464083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.464113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.464476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.464505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.464866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.464896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.465259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.465289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 
00:30:15.558 [2024-11-20 17:13:07.465651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.465679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.466026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.466062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.466412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.466443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.466860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.466888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.467238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.467268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.467611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.467642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.468053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.468081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.468423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.468452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.468806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.468834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.469186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.469215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 
00:30:15.558 [2024-11-20 17:13:07.469529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.469557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.469922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.469950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.470289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.470320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.470719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.470747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.471104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.471131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.471474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.471504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.471862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.471891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.472242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.472272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.472638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.472666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.473026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.473055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 
00:30:15.558 [2024-11-20 17:13:07.473435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.473464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.473815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.473844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.474222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.474251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.474614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.474641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.475008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.475036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.475470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.475506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.475839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.475868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.476271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.476301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.476655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.476683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.477059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.477087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 
00:30:15.558 [2024-11-20 17:13:07.477491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.477521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.477865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.558 [2024-11-20 17:13:07.477894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.558 qpair failed and we were unable to recover it. 00:30:15.558 [2024-11-20 17:13:07.478238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.478267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.478636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.478664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.479020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.479049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.479420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.479450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.479813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.479841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.480192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.480221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.480509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.480538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.480897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.480925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 
00:30:15.559 [2024-11-20 17:13:07.481184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.481214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.481591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.481622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.481874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.481902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.482287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.482317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.482660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.482690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.483050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.483078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.483322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.483355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.483708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.483737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.484095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.484123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.484488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.484518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 
00:30:15.559 [2024-11-20 17:13:07.484879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.484908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.485267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.485297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.485649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.485686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.486053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.486082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.486471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.486501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.486859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.486887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.487262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.487292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.487693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.487723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.487964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.487995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.488381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.488411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 
00:30:15.559 [2024-11-20 17:13:07.488831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.488860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.489198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.489229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.489624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.489652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.490021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.490050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.490389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.490420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.490794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.490823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.491097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.491126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.491521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.491552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.491915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.491945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.492322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.492352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 
00:30:15.559 [2024-11-20 17:13:07.492716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.492744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.493114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.493144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.493513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.493542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.493895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.493923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.494308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.494338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.494742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.494770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.495097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.495125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.495501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.495531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.495833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.495862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 00:30:15.559 [2024-11-20 17:13:07.496228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.559 [2024-11-20 17:13:07.496257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.559 qpair failed and we were unable to recover it. 
00:30:15.559 .. 00:30:15.563 [2024-11-20 17:13:07.496513 .. 17:13:07.569958] (condensed: the identical three-line sequence, "connect() failed, errno = 111" from posix.c:1054, "sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420" from nvme_tcp.c:2288, and "qpair failed and we were unable to recover it.", repeats for 190 further connection attempts)
00:30:15.563 [2024-11-20 17:13:07.570361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.570394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.570750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.570778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.571135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.571174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.571583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.571613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.571992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.572020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.572402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.572433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.572797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.572826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.573182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.573214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.573578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.573606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.574036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.574065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 
00:30:15.563 [2024-11-20 17:13:07.574430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.574461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.574821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.574852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.575108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.575137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.575517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.575550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.575930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.575960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.576331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.576360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.576728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.576756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.577054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.577082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.577483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.577515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.577854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.577884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 
00:30:15.563 [2024-11-20 17:13:07.578256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.578287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.578635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.578674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.579036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.579067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.579407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.579437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.579809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.579838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.580206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.580239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.580618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.580646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.580985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.581020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.581422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.581455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.581794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.581823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 
00:30:15.563 [2024-11-20 17:13:07.582198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.582229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.582562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.582592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.582964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.582992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.563 [2024-11-20 17:13:07.583229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.563 [2024-11-20 17:13:07.583263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.563 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.583612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.583641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.583979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.584009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.584381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.584412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.584747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.584774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.585136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.585195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.585546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.585577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 
00:30:15.564 [2024-11-20 17:13:07.585934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.585963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.586267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.586299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.586676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.586705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.587063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.587091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.587469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.587501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.587845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.587876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.588214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.588243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.588591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.588620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.588989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.589020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.589380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.589411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 
00:30:15.564 [2024-11-20 17:13:07.589766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.589796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.590155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.590202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.590609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.590640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.591006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.591035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.591386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.591419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.591704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.591736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.592103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.592133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.592478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.592514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.592869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.592901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.593151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.593207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 
00:30:15.564 [2024-11-20 17:13:07.593587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.593616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.593971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.594000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.594384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.594416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.594716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.594747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.595094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.595123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.595427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.595457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.595868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.595899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.596236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.596265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.596618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.596647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.596999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.597028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 
00:30:15.564 [2024-11-20 17:13:07.597406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.597436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.597829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.597858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.598302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.598334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.598685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.598714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.598968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.598999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.599416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.599447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.599897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.599928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.600275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.600306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.600682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.600711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.601077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.601109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 
00:30:15.564 [2024-11-20 17:13:07.601610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.601642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.601988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.602018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.602380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.564 [2024-11-20 17:13:07.602413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.564 qpair failed and we were unable to recover it. 00:30:15.564 [2024-11-20 17:13:07.602767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.602796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.603171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.603202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.603563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.603595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.603947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.603978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.604351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.604381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.604647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.604678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.605017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.605052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 
00:30:15.565 [2024-11-20 17:13:07.605411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.605444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.605799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.605828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.606194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.606224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.606486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.606522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.606904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.606938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.607301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.607337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.607688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.607719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.608102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.608134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.608427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.608458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.608803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.608835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 
00:30:15.565 [2024-11-20 17:13:07.609207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.609240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.609595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.609624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.610062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.610091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.610435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.610465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.610723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.610752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.611078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.611107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.611474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.611507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.611913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.611942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.612289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.612321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.612695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.612727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 
00:30:15.565 [2024-11-20 17:13:07.613069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.613100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.613473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.613506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.613886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.613917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.614282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.614313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.614665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.614693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.615099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.615129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.615483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.615514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.615873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.615903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.616278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.616309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.616657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.616689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 
00:30:15.565 [2024-11-20 17:13:07.617034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.617065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.617384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.617416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.617780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.617810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.618181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.618213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.618565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.618593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.618989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.619018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.619374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.619405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.619763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.619791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.620147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.620188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.620481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.620509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 
00:30:15.565 [2024-11-20 17:13:07.620857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.620886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.621190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.621222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.621587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.621616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.621977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.622005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.622363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.622393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.622773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.622801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-11-20 17:13:07.623174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-11-20 17:13:07.623211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-11-20 17:13:07.623587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-11-20 17:13:07.623616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-11-20 17:13:07.623984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-11-20 17:13:07.624012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-11-20 17:13:07.624394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-11-20 17:13:07.624424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 
00:30:15.566 [2024-11-20 17:13:07.624782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-11-20 17:13:07.624811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-11-20 17:13:07.625181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-11-20 17:13:07.625213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-11-20 17:13:07.625511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-11-20 17:13:07.625540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-11-20 17:13:07.625914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-11-20 17:13:07.625943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-11-20 17:13:07.626312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-11-20 17:13:07.626343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-11-20 17:13:07.626698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-11-20 17:13:07.626727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-11-20 17:13:07.627096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-11-20 17:13:07.627126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-11-20 17:13:07.627534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-11-20 17:13:07.627565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-11-20 17:13:07.627959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-11-20 17:13:07.627987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-11-20 17:13:07.628342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-11-20 17:13:07.628381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 
00:30:15.570 [2024-11-20 17:13:07.701864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.701899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.702315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.702345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.702682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.702712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.703090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.703119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.703480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.703509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.703862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.703891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.704258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.704289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.704656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.704685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.705053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.705082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.705450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.705480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 
00:30:15.570 [2024-11-20 17:13:07.705841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.705870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.706225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.706255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.706617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.706646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.707049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.707077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.707438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.707469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.707834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.707863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.708236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.708266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.708627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.708655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.708998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.709026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.709383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.709413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 
00:30:15.570 [2024-11-20 17:13:07.709780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.709817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.710183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.710213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.710581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.710609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.710964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.710994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.711343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.711372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.570 [2024-11-20 17:13:07.711739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.570 [2024-11-20 17:13:07.711767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.570 qpair failed and we were unable to recover it. 00:30:15.571 [2024-11-20 17:13:07.712128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.571 [2024-11-20 17:13:07.712156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.571 qpair failed and we were unable to recover it. 00:30:15.571 [2024-11-20 17:13:07.712539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.571 [2024-11-20 17:13:07.712567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.571 qpair failed and we were unable to recover it. 00:30:15.571 [2024-11-20 17:13:07.712825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.571 [2024-11-20 17:13:07.712854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.571 qpair failed and we were unable to recover it. 00:30:15.571 [2024-11-20 17:13:07.713199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.571 [2024-11-20 17:13:07.713230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.571 qpair failed and we were unable to recover it. 
00:30:15.571 [2024-11-20 17:13:07.713619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.571 [2024-11-20 17:13:07.713648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.571 qpair failed and we were unable to recover it. 00:30:15.571 [2024-11-20 17:13:07.714009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.571 [2024-11-20 17:13:07.714038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.571 qpair failed and we were unable to recover it. 00:30:15.571 [2024-11-20 17:13:07.714401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.571 [2024-11-20 17:13:07.714430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.571 qpair failed and we were unable to recover it. 00:30:15.571 [2024-11-20 17:13:07.714789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.571 [2024-11-20 17:13:07.714818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.571 qpair failed and we were unable to recover it. 00:30:15.571 [2024-11-20 17:13:07.715188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.571 [2024-11-20 17:13:07.715217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.571 qpair failed and we were unable to recover it. 00:30:15.571 [2024-11-20 17:13:07.715539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.571 [2024-11-20 17:13:07.715568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.571 qpair failed and we were unable to recover it. 00:30:15.571 [2024-11-20 17:13:07.715926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.571 [2024-11-20 17:13:07.715954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.571 qpair failed and we were unable to recover it. 00:30:15.571 [2024-11-20 17:13:07.716220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.571 [2024-11-20 17:13:07.716250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.571 qpair failed and we were unable to recover it. 00:30:15.571 [2024-11-20 17:13:07.716605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.571 [2024-11-20 17:13:07.716634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.571 qpair failed and we were unable to recover it. 00:30:15.571 [2024-11-20 17:13:07.716995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.571 [2024-11-20 17:13:07.717024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.571 qpair failed and we were unable to recover it. 
00:30:15.571 [2024-11-20 17:13:07.717405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.571 [2024-11-20 17:13:07.717438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.571 qpair failed and we were unable to recover it. 00:30:15.571 [2024-11-20 17:13:07.717800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.571 [2024-11-20 17:13:07.717830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.571 qpair failed and we were unable to recover it. 00:30:15.843 [2024-11-20 17:13:07.718191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.843 [2024-11-20 17:13:07.718223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.843 qpair failed and we were unable to recover it. 00:30:15.843 [2024-11-20 17:13:07.718489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.843 [2024-11-20 17:13:07.718521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.843 qpair failed and we were unable to recover it. 00:30:15.843 [2024-11-20 17:13:07.718866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.843 [2024-11-20 17:13:07.718895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.843 qpair failed and we were unable to recover it. 00:30:15.843 [2024-11-20 17:13:07.719254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.843 [2024-11-20 17:13:07.719285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.843 qpair failed and we were unable to recover it. 00:30:15.843 [2024-11-20 17:13:07.719587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.843 [2024-11-20 17:13:07.719615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.843 qpair failed and we were unable to recover it. 00:30:15.843 [2024-11-20 17:13:07.719959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.843 [2024-11-20 17:13:07.719989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.843 qpair failed and we were unable to recover it. 00:30:15.843 [2024-11-20 17:13:07.720334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.843 [2024-11-20 17:13:07.720365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.843 qpair failed and we were unable to recover it. 00:30:15.843 [2024-11-20 17:13:07.720734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.843 [2024-11-20 17:13:07.720763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.843 qpair failed and we were unable to recover it. 
00:30:15.844 [2024-11-20 17:13:07.721120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.721151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.721521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.721552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.721893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.721922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.722292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.722323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.722666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.722696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.723031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.723060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.723398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.723427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.723786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.723814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.724170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.724201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.724548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.724577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 
00:30:15.844 [2024-11-20 17:13:07.724941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.724969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.725274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.725303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.725557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.725587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.725929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.725960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.726335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.726366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.726721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.726750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.727105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.727133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.727500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.727531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.727880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.727915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.728282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.728311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 
00:30:15.844 [2024-11-20 17:13:07.728653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.728682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.728945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.728973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.729337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.729366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.729729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.729759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.730115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.730144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.730523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.730552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.730918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.730948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.731307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.731337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.731745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.731774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.732029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.732057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 
00:30:15.844 [2024-11-20 17:13:07.732421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.732451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.732813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.732842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.733193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.733224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.733590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.733619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.733987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.734015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.734384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.734413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.734771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.734800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.735193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.844 [2024-11-20 17:13:07.735223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.844 qpair failed and we were unable to recover it. 00:30:15.844 [2024-11-20 17:13:07.735580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.735609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.735876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.735904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 
00:30:15.845 [2024-11-20 17:13:07.736284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.736315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.736647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.736677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.737034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.737062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.737396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.737427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.737787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.737815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.738180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.738209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.738513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.738541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.738906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.738934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.739293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.739323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.739696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.739723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 
00:30:15.845 [2024-11-20 17:13:07.739981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.740013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.740342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.740372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.740727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.740755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.741107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.741136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.741547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.741576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.741940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.741968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.742327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.742358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.742710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.742739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.743079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.743108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.743483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.743520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 
00:30:15.845 [2024-11-20 17:13:07.743863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.743894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.744250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.744281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.744648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.744677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.745021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.745050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.745414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.745444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.745803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.745832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.746200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.746230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.746594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.746624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.746967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.746995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.747245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.747277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 
00:30:15.845 [2024-11-20 17:13:07.747638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.747667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.748030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.748059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.748411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.748442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.748805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.748834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.749172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.749204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.749570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.749598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.845 qpair failed and we were unable to recover it. 00:30:15.845 [2024-11-20 17:13:07.749963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.845 [2024-11-20 17:13:07.749991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.846 qpair failed and we were unable to recover it. 00:30:15.846 [2024-11-20 17:13:07.750359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.846 [2024-11-20 17:13:07.750389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.846 qpair failed and we were unable to recover it. 00:30:15.846 [2024-11-20 17:13:07.750758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.846 [2024-11-20 17:13:07.750786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.846 qpair failed and we were unable to recover it. 00:30:15.846 [2024-11-20 17:13:07.751156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.846 [2024-11-20 17:13:07.751222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.846 qpair failed and we were unable to recover it. 
00:30:15.846 [2024-11-20 17:13:07.751662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.846 [2024-11-20 17:13:07.751690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.846 qpair failed and we were unable to recover it. 00:30:15.846 [2024-11-20 17:13:07.752023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.846 [2024-11-20 17:13:07.752052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.846 qpair failed and we were unable to recover it. 00:30:15.846 [2024-11-20 17:13:07.752414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.846 [2024-11-20 17:13:07.752445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.846 qpair failed and we were unable to recover it. 00:30:15.846 [2024-11-20 17:13:07.752782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.846 [2024-11-20 17:13:07.752811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.846 qpair failed and we were unable to recover it. 00:30:15.846 [2024-11-20 17:13:07.753189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.846 [2024-11-20 17:13:07.753218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.846 qpair failed and we were unable to recover it. 00:30:15.846 [2024-11-20 17:13:07.753586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.846 [2024-11-20 17:13:07.753614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.846 qpair failed and we were unable to recover it. 00:30:15.846 [2024-11-20 17:13:07.753978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.846 [2024-11-20 17:13:07.754013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.846 qpair failed and we were unable to recover it. 00:30:15.846 [2024-11-20 17:13:07.754388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.846 [2024-11-20 17:13:07.754417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.846 qpair failed and we were unable to recover it. 00:30:15.846 [2024-11-20 17:13:07.754770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.846 [2024-11-20 17:13:07.754798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.846 qpair failed and we were unable to recover it. 00:30:15.846 [2024-11-20 17:13:07.755154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.846 [2024-11-20 17:13:07.755206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.846 qpair failed and we were unable to recover it. 
00:30:15.846 [2024-11-20 17:13:07.755483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.846 [2024-11-20 17:13:07.755511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:15.846 qpair failed and we were unable to recover it.
[... the same three-line record repeats with timestamps advancing through 2024-11-20 17:13:07.836; every attempt to 10.0.0.2:4420 (tqpair=0xc290c0) fails with errno = 111 and the qpair cannot be recovered ...]
00:30:15.852 [2024-11-20 17:13:07.836514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.836543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.836897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.836925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.837286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.837317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.837702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.837731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.837977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.838006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.838351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.838382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.838599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.838627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.838983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.839012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.839428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.839459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.839715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.839747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 
00:30:15.852 [2024-11-20 17:13:07.840096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.840125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.840502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.840533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.840889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.840917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.841287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.841318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.841699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.841728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.842117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.842146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.842537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.842568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.842811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.842840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.843201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.843232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.843636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.843664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 
00:30:15.852 [2024-11-20 17:13:07.844021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.844050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.844423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.844453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.844675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.844703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.845062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.845093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.845510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.845541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.845910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.845938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.846305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.846335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.846718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.846748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.847101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.847131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.847498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.847529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 
00:30:15.852 [2024-11-20 17:13:07.847906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.847936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.848325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.848361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.848743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.848814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.849183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.849215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.852 [2024-11-20 17:13:07.849554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.852 [2024-11-20 17:13:07.849584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.852 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.849958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.849987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.850342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.850374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.850613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.850642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.851033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.851061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.851406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.851436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 
00:30:15.853 [2024-11-20 17:13:07.851800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.851830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.852190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.852219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.852582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.852610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.852975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.853004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.853407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.853438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.853803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.853831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.854200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.854230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.854602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.854632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.855004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.855033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.855392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.855423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 
00:30:15.853 [2024-11-20 17:13:07.855798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.855828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.856188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.856218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.856612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.856642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.856990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.857020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.857237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.857267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.857603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.857633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.858016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.858044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.858253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.858284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.858749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.858778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.859144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.859185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 
00:30:15.853 [2024-11-20 17:13:07.859553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.859582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.859961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.859991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.860336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.860366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.860718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.860748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.861101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.861131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.861556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.861587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.861837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.861869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.862243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.862274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.862552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.862580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.862939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.862969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 
00:30:15.853 [2024-11-20 17:13:07.863329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.863359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.863717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.853 [2024-11-20 17:13:07.863746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.853 qpair failed and we were unable to recover it. 00:30:15.853 [2024-11-20 17:13:07.864103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.864143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.864524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.864554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.864921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.864949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.865279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.865310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.865696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.865724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.866172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.866204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.866561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.866592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.866931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.866961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 
00:30:15.854 [2024-11-20 17:13:07.867319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.867350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.867596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.867626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.867975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.868005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.868360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.868391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.868747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.868778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.869196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.869228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.869592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.869623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.869988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.870018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.870385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.870416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.870734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.870765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 
00:30:15.854 [2024-11-20 17:13:07.871142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.871184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.871587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.871617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.871976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.872004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.872359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.872390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.872767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.872796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.873180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.873209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.873556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.873585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.873921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.873950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.874356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.874387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.874753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.874789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 
00:30:15.854 [2024-11-20 17:13:07.875148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.875193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.875545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.875574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.875930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.875961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.876318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.876348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.876703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.876733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.877080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.877112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.877485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.877515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.877855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.877885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.878241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.878272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.878625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.878657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 
00:30:15.854 [2024-11-20 17:13:07.878999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.854 [2024-11-20 17:13:07.879030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.854 qpair failed and we were unable to recover it. 00:30:15.854 [2024-11-20 17:13:07.879393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.855 [2024-11-20 17:13:07.879425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.855 qpair failed and we were unable to recover it. 00:30:15.855 [2024-11-20 17:13:07.879740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.855 [2024-11-20 17:13:07.879773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.855 qpair failed and we were unable to recover it. 00:30:15.855 [2024-11-20 17:13:07.880175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.855 [2024-11-20 17:13:07.880208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.855 qpair failed and we were unable to recover it. 00:30:15.855 [2024-11-20 17:13:07.880573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.855 [2024-11-20 17:13:07.880602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.855 qpair failed and we were unable to recover it. 00:30:15.855 [2024-11-20 17:13:07.880968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.855 [2024-11-20 17:13:07.880998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.855 qpair failed and we were unable to recover it. 00:30:15.855 [2024-11-20 17:13:07.881339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.855 [2024-11-20 17:13:07.881370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.855 qpair failed and we were unable to recover it. 00:30:15.855 [2024-11-20 17:13:07.881741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.855 [2024-11-20 17:13:07.881772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.855 qpair failed and we were unable to recover it. 00:30:15.855 [2024-11-20 17:13:07.882144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.855 [2024-11-20 17:13:07.882187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.855 qpair failed and we were unable to recover it. 00:30:15.855 [2024-11-20 17:13:07.882476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.855 [2024-11-20 17:13:07.882505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.855 qpair failed and we were unable to recover it. 
00:30:15.855 [2024-11-20 17:13:07.882856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.855 [2024-11-20 17:13:07.882887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.855 qpair failed and we were unable to recover it. 00:30:15.855 [2024-11-20 17:13:07.883254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.855 [2024-11-20 17:13:07.883285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.855 qpair failed and we were unable to recover it. 00:30:15.855 [2024-11-20 17:13:07.883658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.855 [2024-11-20 17:13:07.883687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.855 qpair failed and we were unable to recover it. 00:30:15.855 [2024-11-20 17:13:07.884023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.855 [2024-11-20 17:13:07.884052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.855 qpair failed and we were unable to recover it. 00:30:15.855 [2024-11-20 17:13:07.884442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.855 [2024-11-20 17:13:07.884473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.855 qpair failed and we were unable to recover it. 00:30:15.855 [2024-11-20 17:13:07.884832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.855 [2024-11-20 17:13:07.884860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.855 qpair failed and we were unable to recover it. 00:30:15.855 [2024-11-20 17:13:07.885112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.855 [2024-11-20 17:13:07.885143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.855 qpair failed and we were unable to recover it. 00:30:15.855 [2024-11-20 17:13:07.885400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.855 [2024-11-20 17:13:07.885434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.855 qpair failed and we were unable to recover it. 00:30:15.855 [2024-11-20 17:13:07.885815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.855 [2024-11-20 17:13:07.885847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.855 qpair failed and we were unable to recover it. 00:30:15.855 [2024-11-20 17:13:07.886222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.855 [2024-11-20 17:13:07.886255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.855 qpair failed and we were unable to recover it. 
00:30:15.855 [2024-11-20 17:13:07.886609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.855 [2024-11-20 17:13:07.886638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:15.855 qpair failed and we were unable to recover it.
00:30:15.855-00:30:15.861 [the same three-line failure repeats ~210 times between 17:13:07.886609 and 17:13:07.965203: connect() failed with errno = 111 in posix.c:1054:posix_sock_create, followed by the sock connection error for tqpair=0xc290c0 with addr=10.0.0.2, port=4420 in nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock, followed by "qpair failed and we were unable to recover it." Only the timestamps differ; repetitions collapsed here for readability.]
00:30:15.861 [2024-11-20 17:13:07.965535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.965564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.965972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.966000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.966424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.966453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.966856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.966884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.967248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.967285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.967550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.967579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.967858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.967886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.968247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.968277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.968649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.968679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.969077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.969106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 
00:30:15.861 [2024-11-20 17:13:07.969476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.969505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.969901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.969930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.970283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.970314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.970656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.970685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.971047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.971075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.971535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.971565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.971827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.971855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.972198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.972228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.972599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.972628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.972987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.973016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 
00:30:15.861 [2024-11-20 17:13:07.973383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.973412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.973773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.973801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.974136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.974174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.974521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.974549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.974908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.974936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.975349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.975379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.975737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.975766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.976123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.976151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.976508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.976537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.976901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.976928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 
00:30:15.861 [2024-11-20 17:13:07.977297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.977328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.977685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.977712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.978084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.978112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.861 [2024-11-20 17:13:07.978524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.861 [2024-11-20 17:13:07.978554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.861 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.978997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.979025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.979390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.979422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.979582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.979614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.979959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.979988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.980339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.980368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.980732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.980759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 
00:30:15.862 [2024-11-20 17:13:07.981116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.981145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.981528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.981556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.981933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.981961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.982312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.982344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.982599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.982627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.982994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.983023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.983382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.983411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.983775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.983803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.984045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.984073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.984450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.984480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 
00:30:15.862 [2024-11-20 17:13:07.984893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.984921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.985290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.985319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.985679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.985707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.986040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.986068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.986448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.986478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.986740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.986769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.987117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.987146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.987515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.987544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.987781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.987813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.988180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.988210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 
00:30:15.862 [2024-11-20 17:13:07.988557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.988586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.988959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.988988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.989332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.989363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.989791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.989819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.990182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.990213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.990566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.990594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.990951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.990979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.862 [2024-11-20 17:13:07.991346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.862 [2024-11-20 17:13:07.991375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.862 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.991744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.991773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.992010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.992038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 
00:30:15.863 [2024-11-20 17:13:07.992411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.992440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.992821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.992849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.993195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.993232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.993605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.993633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.994002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.994030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.994394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.994426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.994792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.994821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.995180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.995209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.995434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.995462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.995829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.995857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 
00:30:15.863 [2024-11-20 17:13:07.996132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.996168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.996532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.996561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.996950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.996978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.997355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.997385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.997729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.997758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.998171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.998201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.998575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.998603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.998978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.999007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.999367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.999398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:07.999758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:07.999787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 
00:30:15.863 [2024-11-20 17:13:08.000028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:08.000059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:08.000412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:08.000444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:08.000811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:08.000840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:08.001206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:08.001235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:08.001650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:08.001679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:08.002045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:08.002073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:08.002371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:08.002400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:08.002745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:08.002773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:08.003136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:08.003179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:08.003538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:08.003568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 
00:30:15.863 [2024-11-20 17:13:08.003825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:08.003854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:08.004193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:08.004224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:08.004587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:08.004615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:08.004990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:08.005018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:15.863 [2024-11-20 17:13:08.005362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.863 [2024-11-20 17:13:08.005393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:15.863 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.005740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.005771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.006152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.006195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.006558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.006586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.006940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.006968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.007411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.007440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 
00:30:16.139 [2024-11-20 17:13:08.007783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.007813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.008184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.008215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.008609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.008637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.008891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.008927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.009327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.009357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.009711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.009739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.010103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.010131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.010440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.010469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.010831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.010859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.011227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.011256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 
00:30:16.139 [2024-11-20 17:13:08.011595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.011623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.011980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.012009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.012371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.012401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.012756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.012784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.013145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.013185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.013565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.013593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.013971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.013999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.014422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.014452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.014799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.014829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 00:30:16.139 [2024-11-20 17:13:08.015189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.139 [2024-11-20 17:13:08.015219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.139 qpair failed and we were unable to recover it. 
00:30:16.139 [2024-11-20 17:13:08.015561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.139 [2024-11-20 17:13:08.015590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.139 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0xc290c0 at addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 17:13:08.015 and 17:13:08.095 ...]
00:30:16.145 [2024-11-20 17:13:08.095966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.095998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 00:30:16.145 [2024-11-20 17:13:08.096372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.096402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 00:30:16.145 [2024-11-20 17:13:08.096717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.096747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 00:30:16.145 [2024-11-20 17:13:08.097107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.097136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 00:30:16.145 [2024-11-20 17:13:08.097505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.097536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 00:30:16.145 [2024-11-20 17:13:08.097924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.097954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 00:30:16.145 [2024-11-20 17:13:08.098217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.098248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 00:30:16.145 [2024-11-20 17:13:08.098512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.098541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 00:30:16.145 [2024-11-20 17:13:08.098916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.098944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 00:30:16.145 [2024-11-20 17:13:08.099201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.099234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 
00:30:16.145 [2024-11-20 17:13:08.099519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.099549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 00:30:16.145 [2024-11-20 17:13:08.099901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.099929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 00:30:16.145 [2024-11-20 17:13:08.100143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.100186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 00:30:16.145 [2024-11-20 17:13:08.100551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.100580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 00:30:16.145 [2024-11-20 17:13:08.100966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.100995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 00:30:16.145 [2024-11-20 17:13:08.101371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.101400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 00:30:16.145 [2024-11-20 17:13:08.101757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.101786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 00:30:16.145 [2024-11-20 17:13:08.102238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.102269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 00:30:16.145 [2024-11-20 17:13:08.102539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.102568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 00:30:16.145 [2024-11-20 17:13:08.102940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.102969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 
00:30:16.145 [2024-11-20 17:13:08.103330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.145 [2024-11-20 17:13:08.103361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.145 qpair failed and we were unable to recover it. 00:30:16.145 [2024-11-20 17:13:08.103733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.103764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.103978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.104008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.104254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.104288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.104645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.104674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.105027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.105058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.105382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.105412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.105801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.105830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.106188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.106218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.106477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.106508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 
00:30:16.146 [2024-11-20 17:13:08.106842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.106871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.107217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.107249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.107631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.107660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.107922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.107950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.108322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.108354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.108654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.108683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.109044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.109073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.109414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.109444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.109783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.109812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.110176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.110207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 
00:30:16.146 [2024-11-20 17:13:08.110459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.110488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.110664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.110696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.111078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.111107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.111470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.111500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.111869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.111898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.112307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.112344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.112685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.112715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.113075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.113104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.113338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.113368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.113716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.113745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 
00:30:16.146 [2024-11-20 17:13:08.114127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.114155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.114519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.114548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.114894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.114922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.115296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.115327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.115764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.115794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.116173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.116204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.116590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.116620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.116982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.117011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.117418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.146 [2024-11-20 17:13:08.117448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.146 qpair failed and we were unable to recover it. 00:30:16.146 [2024-11-20 17:13:08.117883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.117914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 
00:30:16.147 [2024-11-20 17:13:08.118183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.118214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.118580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.118608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.118984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.119012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.119382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.119414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.119773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.119803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.120178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.120209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.120587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.120615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.120984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.121012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.121384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.121413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.121785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.121813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 
00:30:16.147 [2024-11-20 17:13:08.122190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.122220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.122342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.122372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.122722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.122752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.123003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.123031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.123409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.123441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.123869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.123898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.124148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.124189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.124433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.124464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.124829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.124858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.125204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.125235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 
00:30:16.147 [2024-11-20 17:13:08.125601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.125631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.125993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.126021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.126262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.126295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.126654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.126684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.126928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.126957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.127326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.127356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.127730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.127764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.128109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.128140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.128501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.128531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.128903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.128931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 
00:30:16.147 [2024-11-20 17:13:08.129294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.129326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.129681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.129710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.129922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.129950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.130338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.130368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.130722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.130751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.131010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.131042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.131387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.131417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.147 qpair failed and we were unable to recover it. 00:30:16.147 [2024-11-20 17:13:08.131654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.147 [2024-11-20 17:13:08.131684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.132042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.132070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.132319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.132348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 
00:30:16.148 [2024-11-20 17:13:08.132682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.132712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.133069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.133097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.133442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.133472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.133723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.133752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.134131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.134171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.134521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.134549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.134899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.134929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.135292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.135323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.135765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.135793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.136123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.136151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 
00:30:16.148 [2024-11-20 17:13:08.136541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.136578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.136939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.136967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.137315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.137345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.137734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.137770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.138136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.138175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.138582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.138611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.138972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.139000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.139294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.139324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.139594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.139623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.139972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.140001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 
00:30:16.148 [2024-11-20 17:13:08.140355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.140385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.140747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.140775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.141123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.141151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.141533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.141563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.141933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.141960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.142311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.142342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.142715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.142743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.142987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.143015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.143351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.143381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 00:30:16.148 [2024-11-20 17:13:08.143745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.148 [2024-11-20 17:13:08.143774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.148 qpair failed and we were unable to recover it. 
00:30:16.148 [2024-11-20 17:13:08.144021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.148 [2024-11-20 17:13:08.144050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.148 qpair failed and we were unable to recover it.
[... the same three-line error sequence — posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats roughly 200 more times between 17:13:08.144 and 17:13:08.222 as the qpair connection is retried ...]
00:30:16.154 [2024-11-20 17:13:08.222418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.154 [2024-11-20 17:13:08.222448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.154 qpair failed and we were unable to recover it.
00:30:16.154 [2024-11-20 17:13:08.222805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.154 [2024-11-20 17:13:08.222834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.154 qpair failed and we were unable to recover it. 00:30:16.154 [2024-11-20 17:13:08.223203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.154 [2024-11-20 17:13:08.223232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.154 qpair failed and we were unable to recover it. 00:30:16.154 [2024-11-20 17:13:08.223609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.154 [2024-11-20 17:13:08.223638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.154 qpair failed and we were unable to recover it. 00:30:16.154 [2024-11-20 17:13:08.224015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.154 [2024-11-20 17:13:08.224044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.154 qpair failed and we were unable to recover it. 00:30:16.154 [2024-11-20 17:13:08.224411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.154 [2024-11-20 17:13:08.224441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.154 qpair failed and we were unable to recover it. 00:30:16.154 [2024-11-20 17:13:08.224851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.154 [2024-11-20 17:13:08.224880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.154 qpair failed and we were unable to recover it. 00:30:16.154 [2024-11-20 17:13:08.225232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.154 [2024-11-20 17:13:08.225263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.154 qpair failed and we were unable to recover it. 00:30:16.154 [2024-11-20 17:13:08.225655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.154 [2024-11-20 17:13:08.225683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.154 qpair failed and we were unable to recover it. 00:30:16.154 [2024-11-20 17:13:08.226039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.154 [2024-11-20 17:13:08.226067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.154 qpair failed and we were unable to recover it. 00:30:16.154 [2024-11-20 17:13:08.226325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.154 [2024-11-20 17:13:08.226355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.154 qpair failed and we were unable to recover it. 
00:30:16.154 [2024-11-20 17:13:08.226695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.154 [2024-11-20 17:13:08.226724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.154 qpair failed and we were unable to recover it. 00:30:16.154 [2024-11-20 17:13:08.227097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.154 [2024-11-20 17:13:08.227125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.154 qpair failed and we were unable to recover it. 00:30:16.154 [2024-11-20 17:13:08.227511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.154 [2024-11-20 17:13:08.227541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.154 qpair failed and we were unable to recover it. 00:30:16.154 [2024-11-20 17:13:08.227881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.154 [2024-11-20 17:13:08.227908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.154 qpair failed and we were unable to recover it. 00:30:16.154 [2024-11-20 17:13:08.228253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.154 [2024-11-20 17:13:08.228283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.154 qpair failed and we were unable to recover it. 00:30:16.154 [2024-11-20 17:13:08.228559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.154 [2024-11-20 17:13:08.228587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.154 qpair failed and we were unable to recover it. 00:30:16.154 [2024-11-20 17:13:08.229010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.154 [2024-11-20 17:13:08.229038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.154 qpair failed and we were unable to recover it. 00:30:16.154 [2024-11-20 17:13:08.229291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.154 [2024-11-20 17:13:08.229321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.154 qpair failed and we were unable to recover it. 00:30:16.154 [2024-11-20 17:13:08.229606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.154 [2024-11-20 17:13:08.229640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.230003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.230032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 
00:30:16.155 [2024-11-20 17:13:08.230422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.230452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.230809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.230836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.231218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.231247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.231595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.231624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.231970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.231998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.232342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.232373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.232722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.232749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.233122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.233152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.233521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.233551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.233910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.233938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 
00:30:16.155 [2024-11-20 17:13:08.234186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.234219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.234577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.234606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.234984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.235012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.235380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.235409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.235799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.235835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.236123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.236151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.236565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.236594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.236958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.236987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.237341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.237371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.237707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.237736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 
00:30:16.155 [2024-11-20 17:13:08.238101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.238129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.238495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.238525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.238779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.238810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.239157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.239197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.239543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.239572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.239933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.239962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.240331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.240361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.240721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.240749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.241115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.241143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.241570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.241599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 
00:30:16.155 [2024-11-20 17:13:08.241955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.241983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.242360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.242390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.242765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.242794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.243167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.243197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.243519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.243547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.243908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.243937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.244298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.155 [2024-11-20 17:13:08.244328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.155 qpair failed and we were unable to recover it. 00:30:16.155 [2024-11-20 17:13:08.244592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.244619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.244976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.245005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.245378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.245410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 
00:30:16.156 [2024-11-20 17:13:08.245773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.245802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.246177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.246208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.246547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.246576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.246939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.246967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.247317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.247348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.247602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.247630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.247982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.248010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.248368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.248398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.248765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.248793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.249167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.249198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 
00:30:16.156 [2024-11-20 17:13:08.249557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.249586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.250032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.250060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.250432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.250462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.250855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.250885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.251236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.251266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.251631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.251661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.252027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.252055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.252430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.252459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.252816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.252844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.253215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.253245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 
00:30:16.156 [2024-11-20 17:13:08.253493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.253523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.253883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.253912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.254296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.254325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.254665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.254693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.255026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.255054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.255428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.255458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.255811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.255846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.256248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.256278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.256637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.256665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.257018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.257047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 
00:30:16.156 [2024-11-20 17:13:08.257381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.257411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.257785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.257813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.258179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.258208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.258558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.258587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.258949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.258978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.156 [2024-11-20 17:13:08.259383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.156 [2024-11-20 17:13:08.259413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.156 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.259741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.259771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.260134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.260171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.260530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.260558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.260926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.260954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 
00:30:16.157 [2024-11-20 17:13:08.261302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.261334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.261683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.261712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.262059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.262088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.262456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.262486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.262869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.262897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.263226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.263256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.263661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.263690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.264028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.264057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.264401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.264430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.264771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.264799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 
00:30:16.157 [2024-11-20 17:13:08.265186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.265216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.265613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.265641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.265991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.266020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.266366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.266396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.266754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.266784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.267132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.267173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.267530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.267559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.267921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.267950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.268326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.268356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.268715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.268745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 
00:30:16.157 [2024-11-20 17:13:08.269093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.269123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.269505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.269536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.269883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.269911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.270247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.270276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.270640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.270668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.271025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.271053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.271405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.271434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.271811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.271845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.272198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.157 [2024-11-20 17:13:08.272229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.157 qpair failed and we were unable to recover it. 00:30:16.157 [2024-11-20 17:13:08.272583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.158 [2024-11-20 17:13:08.272611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.158 qpair failed and we were unable to recover it. 
00:30:16.158 [2024-11-20 17:13:08.272980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.158 [2024-11-20 17:13:08.273007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.158 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 / sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats back-to-back for every retried connect attempt in this window, from 17:13:08.272980 through 17:13:08.353114 ...]
00:30:16.436 [2024-11-20 17:13:08.353086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.436 [2024-11-20 17:13:08.353114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.436 qpair failed and we were unable to recover it.
00:30:16.436 [2024-11-20 17:13:08.353463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.436 [2024-11-20 17:13:08.353494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.436 qpair failed and we were unable to recover it. 00:30:16.436 [2024-11-20 17:13:08.353789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.436 [2024-11-20 17:13:08.353817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.436 qpair failed and we were unable to recover it. 00:30:16.436 [2024-11-20 17:13:08.354184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.436 [2024-11-20 17:13:08.354216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.436 qpair failed and we were unable to recover it. 00:30:16.436 [2024-11-20 17:13:08.354562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.436 [2024-11-20 17:13:08.354592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.436 qpair failed and we were unable to recover it. 00:30:16.436 [2024-11-20 17:13:08.354962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.436 [2024-11-20 17:13:08.354991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.436 qpair failed and we were unable to recover it. 00:30:16.436 [2024-11-20 17:13:08.355375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.436 [2024-11-20 17:13:08.355405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.436 qpair failed and we were unable to recover it. 00:30:16.436 [2024-11-20 17:13:08.355774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.436 [2024-11-20 17:13:08.355805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.436 qpair failed and we were unable to recover it. 00:30:16.436 [2024-11-20 17:13:08.356174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.436 [2024-11-20 17:13:08.356204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.436 qpair failed and we were unable to recover it. 00:30:16.436 [2024-11-20 17:13:08.356422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.436 [2024-11-20 17:13:08.356451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.436 qpair failed and we were unable to recover it. 00:30:16.436 [2024-11-20 17:13:08.356818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.436 [2024-11-20 17:13:08.356847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.436 qpair failed and we were unable to recover it. 
00:30:16.436 [2024-11-20 17:13:08.357222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.436 [2024-11-20 17:13:08.357254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.436 qpair failed and we were unable to recover it. 00:30:16.436 [2024-11-20 17:13:08.357625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.436 [2024-11-20 17:13:08.357653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.436 qpair failed and we were unable to recover it. 00:30:16.436 [2024-11-20 17:13:08.358039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.436 [2024-11-20 17:13:08.358068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.436 qpair failed and we were unable to recover it. 00:30:16.436 [2024-11-20 17:13:08.358321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.358351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.358731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.358760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.359009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.359040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.359413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.359442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.359794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.359827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.360217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.360249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.360518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.360548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 
00:30:16.437 [2024-11-20 17:13:08.360909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.360938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.361370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.361401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.361774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.361802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.362049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.362078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.362312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.362346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.362728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.362757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.363181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.363212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.363555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.363585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.363884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.363912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.364331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.364361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 
00:30:16.437 [2024-11-20 17:13:08.364732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.364762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.365122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.365151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.365556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.365588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.365822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.365851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.366182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.366217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.366476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.366508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.366904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.366933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.367231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.367263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.367566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.367596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.367929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.367958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 
00:30:16.437 [2024-11-20 17:13:08.368217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.368246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.368476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.368504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.368856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.368887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.369116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.369148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.369458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.369489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.369835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.437 [2024-11-20 17:13:08.369865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.437 qpair failed and we were unable to recover it. 00:30:16.437 [2024-11-20 17:13:08.370156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.370197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.370567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.370595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.370860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.370888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.371090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.371120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 
00:30:16.438 [2024-11-20 17:13:08.371500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.371529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.371882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.371913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.372246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.372276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.372616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.372645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.372867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.372898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.373121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.373150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.373423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.373452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.373806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.373834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.374193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.374224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.374478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.374517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 
00:30:16.438 [2024-11-20 17:13:08.374868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.374898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.375282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.375314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.375694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.375723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.375948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.375980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.376217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.376248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.376575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.376604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.376961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.376990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.377334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.377363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.377741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.377770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.378131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.378170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 
00:30:16.438 [2024-11-20 17:13:08.378543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.378572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.378937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.378965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.379221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.379250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.379663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.379691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.380019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.380047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.380400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.380430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.380821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.380850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.381239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.381269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.381629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.381656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.382006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.382035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 
00:30:16.438 [2024-11-20 17:13:08.382384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.382413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.382742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.382771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.383131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.383169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.383536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.383564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.438 [2024-11-20 17:13:08.383914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.438 [2024-11-20 17:13:08.383943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.438 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.384300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.384330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.384685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.384713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.384975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.385003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.385385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.385415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.385761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.385789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 
00:30:16.439 [2024-11-20 17:13:08.386144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.386183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.386561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.386590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.386949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.386977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.387331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.387360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.387706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.387735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.388092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.388121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.388496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.388525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.388890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.388919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.389202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.389239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.389583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.389612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 
00:30:16.439 [2024-11-20 17:13:08.389970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.390006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.390379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.390408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.390645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.390673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.391024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.391052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.391453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.391483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.391828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.391857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.392092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.392121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.392539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.392570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.392901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.392931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.393197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.393227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 
00:30:16.439 [2024-11-20 17:13:08.393527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.393556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.393942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.393970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.394325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.394356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.394718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.394747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.395030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.395058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.395401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.395431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.395790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.395818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.396074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.396103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.396482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.396512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.396895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.396923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 
00:30:16.439 [2024-11-20 17:13:08.397290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.397321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.397582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.397610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.397977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.439 [2024-11-20 17:13:08.398005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.439 qpair failed and we were unable to recover it. 00:30:16.439 [2024-11-20 17:13:08.398404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.440 [2024-11-20 17:13:08.398434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.440 qpair failed and we were unable to recover it. 00:30:16.440 [2024-11-20 17:13:08.398803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.440 [2024-11-20 17:13:08.398831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.440 qpair failed and we were unable to recover it. 00:30:16.440 [2024-11-20 17:13:08.399213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.440 [2024-11-20 17:13:08.399242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.440 qpair failed and we were unable to recover it. 00:30:16.440 [2024-11-20 17:13:08.399619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.440 [2024-11-20 17:13:08.399646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.440 qpair failed and we were unable to recover it. 00:30:16.440 [2024-11-20 17:13:08.399984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.440 [2024-11-20 17:13:08.400020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.440 qpair failed and we were unable to recover it. 00:30:16.440 [2024-11-20 17:13:08.400427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.440 [2024-11-20 17:13:08.400456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.440 qpair failed and we were unable to recover it. 00:30:16.440 [2024-11-20 17:13:08.400804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.440 [2024-11-20 17:13:08.400834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.440 qpair failed and we were unable to recover it. 
00:30:16.440 [2024-11-20 17:13:08.401207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.440 [2024-11-20 17:13:08.401238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.440 qpair failed and we were unable to recover it.
[... roughly 200 further repetitions of the same three-line failure (connect() failed, errno = 111 in posix_sock_create; sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 in nvme_tcp_qpair_connect_sock; "qpair failed and we were unable to recover it.") between 17:13:08.401 and 17:13:08.480; duplicate records elided ...]
00:30:16.445 [2024-11-20 17:13:08.480850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.445 [2024-11-20 17:13:08.480880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.445 qpair failed and we were unable to recover it. 00:30:16.445 [2024-11-20 17:13:08.481245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.445 [2024-11-20 17:13:08.481275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.445 qpair failed and we were unable to recover it. 00:30:16.445 [2024-11-20 17:13:08.481655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.445 [2024-11-20 17:13:08.481683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.445 qpair failed and we were unable to recover it. 00:30:16.445 [2024-11-20 17:13:08.481933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.445 [2024-11-20 17:13:08.481961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.445 qpair failed and we were unable to recover it. 00:30:16.445 [2024-11-20 17:13:08.482388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.445 [2024-11-20 17:13:08.482418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.445 qpair failed and we were unable to recover it. 00:30:16.445 [2024-11-20 17:13:08.482756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.445 [2024-11-20 17:13:08.482785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.445 qpair failed and we were unable to recover it. 00:30:16.445 [2024-11-20 17:13:08.483172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.445 [2024-11-20 17:13:08.483211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.445 qpair failed and we were unable to recover it. 00:30:16.445 [2024-11-20 17:13:08.483556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.445 [2024-11-20 17:13:08.483584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.445 qpair failed and we were unable to recover it. 00:30:16.445 [2024-11-20 17:13:08.483857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.445 [2024-11-20 17:13:08.483885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.445 qpair failed and we were unable to recover it. 00:30:16.445 [2024-11-20 17:13:08.484257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.445 [2024-11-20 17:13:08.484288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.445 qpair failed and we were unable to recover it. 
00:30:16.446 [2024-11-20 17:13:08.484555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.484583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.484975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.485004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.485354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.485385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.485722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.485750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.486127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.486156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.486552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.486581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.486830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.486858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.487258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.487288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.487638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.487667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.487938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.487967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 
00:30:16.446 [2024-11-20 17:13:08.488381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.488413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.488628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.488656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.489020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.489048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.489391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.489421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.489811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.489840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.490226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.490255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.490563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.490592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.490835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.490868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.491258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.491287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.491658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.491687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 
00:30:16.446 [2024-11-20 17:13:08.492060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.492088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.492445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.492476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.492826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.492861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.493157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.493200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.493584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.493613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.493954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.493984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.494338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.494368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.494737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.494766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.495134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.495173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.495500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.495529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 
00:30:16.446 [2024-11-20 17:13:08.495871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.495899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.496255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.496285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.496654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.496683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.446 [2024-11-20 17:13:08.497060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.446 [2024-11-20 17:13:08.497088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.446 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.497512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.497542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.497916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.497944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.498294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.498324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.498691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.498720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.499150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.499190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.499604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.499632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 
00:30:16.447 [2024-11-20 17:13:08.499971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.499999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.500339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.500370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.500711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.500740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.500980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.501009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.501393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.501423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.501793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.501823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.502189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.502219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.502464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.502492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.502813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.502841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.503196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.503227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 
00:30:16.447 [2024-11-20 17:13:08.503586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.503615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.503954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.503982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.504339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.504369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.504721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.504750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.505012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.505041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.505454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.505484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.505825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.505854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.506218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.506248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.506625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.506654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.507010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.507040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 
00:30:16.447 [2024-11-20 17:13:08.507383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.507412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.507787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.507815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.508192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.508224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.508581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.508611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.508976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.509004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.509381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.509411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.509776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.509805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.510037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.510068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.510434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.510465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.510802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.510831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 
00:30:16.447 [2024-11-20 17:13:08.511220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.511250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.511596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.447 [2024-11-20 17:13:08.511625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.447 qpair failed and we were unable to recover it. 00:30:16.447 [2024-11-20 17:13:08.511988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.512017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.512261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.512292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.512653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.512681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.513040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.513068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.513487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.513517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.513888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.513920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.514454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.514491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.514848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.514883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 
00:30:16.448 [2024-11-20 17:13:08.515277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.515307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.515652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.515681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.516041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.516072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.516479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.516511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.516852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.516881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.517137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.517179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.517411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.517439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.517780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.517810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.518254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.518284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.518651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.518681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 
00:30:16.448 [2024-11-20 17:13:08.519033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.519070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.519445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.519475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.519830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.519858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.520225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.520254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.520604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.520632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.520979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.521008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.521351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.521382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.521729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.521758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.522116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.522144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.522613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.522643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 
00:30:16.448 [2024-11-20 17:13:08.523000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.523030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.523281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.523312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.523688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.523718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.524075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.524104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.524401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.524432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.524775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.524803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.525182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.525214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.525538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.525567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.525941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.525969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.526332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.526362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 
00:30:16.448 [2024-11-20 17:13:08.526801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.448 [2024-11-20 17:13:08.526830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.448 qpair failed and we were unable to recover it. 00:30:16.448 [2024-11-20 17:13:08.527207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.527238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.527626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.527655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.527995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.528023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.528387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.528416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.528827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.528855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.529220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.529249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.529603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.529633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.529999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.530029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.530374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.530404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 
00:30:16.449 [2024-11-20 17:13:08.530770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.530798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.531201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.531232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.531600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.531631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.531987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.532016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.532400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.532432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.532674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.532706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.532969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.532998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.533240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.533269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.533690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.533718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.534078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.534108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 
00:30:16.449 [2024-11-20 17:13:08.534473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.534503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.534873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.534909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.535257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.535289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.535662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.535691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.536060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.536090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.536462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.536493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.536834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.536864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.537232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.537280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.537508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.537539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.537907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.537936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 
00:30:16.449 [2024-11-20 17:13:08.538307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.538337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.538693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.538722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.539065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.539094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.539442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.539473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.539854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.539883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.540235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.540264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.540625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.540657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.540882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.540911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.541276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.541308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 00:30:16.449 [2024-11-20 17:13:08.541678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.449 [2024-11-20 17:13:08.541707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.449 qpair failed and we were unable to recover it. 
00:30:16.450 [2024-11-20 17:13:08.542145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.542186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.542479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.542507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.542858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.542887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.543249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.543282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.543537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.543565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.543921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.543949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.544321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.544351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.544725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.544753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.545119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.545154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.545554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.545582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 
00:30:16.450 [2024-11-20 17:13:08.545938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.545968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.546325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.546355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.546712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.546739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.547085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.547113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.547477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.547507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.547868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.547896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.548262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.548292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.548659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.548688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.549047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.549075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.549318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.549351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 
00:30:16.450 [2024-11-20 17:13:08.549629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.549657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.550021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.550050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.550411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.550442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.550799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.550829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.551185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.551215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.551461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.551490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.551862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.551891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.552239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.552269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.552517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.552549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.552886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.552915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 
00:30:16.450 [2024-11-20 17:13:08.553279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.553309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.553672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.553700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.554066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.554094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.554494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.554523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.554891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.554929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.555287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.555318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.555681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.450 [2024-11-20 17:13:08.555709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.450 qpair failed and we were unable to recover it. 00:30:16.450 [2024-11-20 17:13:08.556075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.556103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.556474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.556505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.556844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.556875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 
00:30:16.451 [2024-11-20 17:13:08.557235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.557266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.557642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.557671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.558032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.558061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.558492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.558521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.558886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.558916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.559275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.559305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.559672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.559700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.560054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.560083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.560439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.560468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.560828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.560869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 
00:30:16.451 [2024-11-20 17:13:08.561229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.561260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.561639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.561668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.562027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.562056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.562429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.562459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.562850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.562879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.563248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.563280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.563639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.563670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.564031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.564059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.564490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.564519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.564873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.564903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 
00:30:16.451 [2024-11-20 17:13:08.565275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.565305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.565752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.565780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.566109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.566138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.566596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.566627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.566993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.567022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.567274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.567308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.567714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.567745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.568116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.568147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.568519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.568550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.568908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.568937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 
00:30:16.451 [2024-11-20 17:13:08.569313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.451 [2024-11-20 17:13:08.569344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.451 qpair failed and we were unable to recover it. 00:30:16.451 [2024-11-20 17:13:08.569710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.569738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.570100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.570131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.570535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.570566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.570935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.570964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.571311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.571343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.571762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.571798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.572134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.572175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.572529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.572558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.572891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.572922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 
00:30:16.452 [2024-11-20 17:13:08.573287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.573318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.573694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.573722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.574004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.574033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.574378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.574408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.574780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.574809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.575183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.575214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.575574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.575603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.575977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.576006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.576386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.576418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.576794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.576823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 
00:30:16.452 [2024-11-20 17:13:08.577200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.577232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.577640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.577669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.578052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.578081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.578356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.578386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.578764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.578792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.579178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.579209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.579475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.579507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.579875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.579904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.580376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.580408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.580812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.580841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 
00:30:16.452 [2024-11-20 17:13:08.581072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.581107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.581511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.581543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.581785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.581813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.582203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.582235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.582630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.582659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.583031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.583062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.583322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.583353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.583733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.583764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.584135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.584176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 00:30:16.452 [2024-11-20 17:13:08.584585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.452 [2024-11-20 17:13:08.584615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.452 qpair failed and we were unable to recover it. 
00:30:16.452 [2024-11-20 17:13:08.585015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.585045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.585389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.585419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.585745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.585774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.586149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.586195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.586567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.586596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.586963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.586992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.587357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.587390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.587655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.587690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.588048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.588078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.588423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.588454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 
00:30:16.453 [2024-11-20 17:13:08.588812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.588842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.589224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.589254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.589640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.589669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.590053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.590083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.590353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.590384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.590753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.590784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.591129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.591183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.591560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.591592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.591989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.592020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.592382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.592416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 
00:30:16.453 [2024-11-20 17:13:08.592796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.592826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.593121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.593151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.593523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.593554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.593892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.593923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.453 qpair failed and we were unable to recover it. 00:30:16.453 [2024-11-20 17:13:08.594288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.453 [2024-11-20 17:13:08.594320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.727 qpair failed and we were unable to recover it. 00:30:16.727 [2024-11-20 17:13:08.594593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.727 [2024-11-20 17:13:08.594624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.727 qpair failed and we were unable to recover it. 00:30:16.727 [2024-11-20 17:13:08.594985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.727 [2024-11-20 17:13:08.595016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.727 qpair failed and we were unable to recover it. 00:30:16.727 [2024-11-20 17:13:08.595363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.727 [2024-11-20 17:13:08.595394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.727 qpair failed and we were unable to recover it. 00:30:16.727 [2024-11-20 17:13:08.595740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.727 [2024-11-20 17:13:08.595770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.727 qpair failed and we were unable to recover it. 00:30:16.727 [2024-11-20 17:13:08.596136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.727 [2024-11-20 17:13:08.596177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.727 qpair failed and we were unable to recover it. 
00:30:16.727 [2024-11-20 17:13:08.596557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.727 [2024-11-20 17:13:08.596585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.727 qpair failed and we were unable to recover it. 00:30:16.727 [2024-11-20 17:13:08.597024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.727 [2024-11-20 17:13:08.597054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.727 qpair failed and we were unable to recover it. 00:30:16.727 [2024-11-20 17:13:08.597443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.727 [2024-11-20 17:13:08.597476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.727 qpair failed and we were unable to recover it. 00:30:16.727 [2024-11-20 17:13:08.597848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.727 [2024-11-20 17:13:08.597878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.727 qpair failed and we were unable to recover it. 00:30:16.727 [2024-11-20 17:13:08.598251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.727 [2024-11-20 17:13:08.598282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.727 qpair failed and we were unable to recover it. 00:30:16.727 [2024-11-20 17:13:08.598631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.727 [2024-11-20 17:13:08.598659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.727 qpair failed and we were unable to recover it. 00:30:16.727 [2024-11-20 17:13:08.599080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.727 [2024-11-20 17:13:08.599108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.727 qpair failed and we were unable to recover it. 00:30:16.727 [2024-11-20 17:13:08.599518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.727 [2024-11-20 17:13:08.599550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.727 qpair failed and we were unable to recover it. 00:30:16.727 [2024-11-20 17:13:08.599896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.727 [2024-11-20 17:13:08.599927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.727 qpair failed and we were unable to recover it. 00:30:16.727 [2024-11-20 17:13:08.600191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.727 [2024-11-20 17:13:08.600222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.727 qpair failed and we were unable to recover it. 
00:30:16.727 [2024-11-20 17:13:08.600464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.728 [2024-11-20 17:13:08.600496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.728 qpair failed and we were unable to recover it. 00:30:16.728 [2024-11-20 17:13:08.600871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.728 [2024-11-20 17:13:08.600901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.728 qpair failed and we were unable to recover it. 00:30:16.728 [2024-11-20 17:13:08.601151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.728 [2024-11-20 17:13:08.601194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.728 qpair failed and we were unable to recover it. 00:30:16.728 [2024-11-20 17:13:08.601572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.728 [2024-11-20 17:13:08.601603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.728 qpair failed and we were unable to recover it. 00:30:16.728 [2024-11-20 17:13:08.602031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.728 [2024-11-20 17:13:08.602060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.728 qpair failed and we were unable to recover it. 00:30:16.728 [2024-11-20 17:13:08.602321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.728 [2024-11-20 17:13:08.602353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.728 qpair failed and we were unable to recover it. 00:30:16.728 [2024-11-20 17:13:08.602742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.728 [2024-11-20 17:13:08.602771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.728 qpair failed and we were unable to recover it. 00:30:16.728 [2024-11-20 17:13:08.603016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.728 [2024-11-20 17:13:08.603050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.728 qpair failed and we were unable to recover it. 00:30:16.728 [2024-11-20 17:13:08.603437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.728 [2024-11-20 17:13:08.603470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.728 qpair failed and we were unable to recover it. 00:30:16.728 [2024-11-20 17:13:08.603816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.728 [2024-11-20 17:13:08.603845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.728 qpair failed and we were unable to recover it. 
00:30:16.728 [2024-11-20 17:13:08.604302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.604333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.604618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.604646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.604996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.605027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.605463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.605494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.605866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.605895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.606243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.606272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.606507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.606541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.606921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.606952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.607284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.607314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.607707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.607736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.608114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.608143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.608529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.608558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.608930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.608963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.609219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.609250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.609618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.609648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.610006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.610035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.610416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.610450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.610804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.610836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.611217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.611249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.611668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.611700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.612052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.612084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.612376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.612408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.612650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.612681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.613029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.613057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.613478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.613509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.613847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.613882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.614219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.614253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.614611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.728 [2024-11-20 17:13:08.614641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.728 qpair failed and we were unable to recover it.
00:30:16.728 [2024-11-20 17:13:08.615006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.615034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.615247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.615275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.615673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.615702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.616075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.616106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.616473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.616504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.616873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.616903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.617258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.617287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.617670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.617701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.618096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.618123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.618555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.618586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.618950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.618980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.619244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.619276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.619660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.619690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.620051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.620080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.620439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.620468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.620842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.620874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.621223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.621257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.621608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.621637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.621994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.622027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.622403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.622437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.622786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.622825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.623209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.623241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.623640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.623669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.624022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.624051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.624404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.624434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.624694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.624724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.625096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.625127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.625432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.625462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.625867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.625897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.626156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.626202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.626663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.626693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.627068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.627097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.627473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.627503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.627848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.627877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.628249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.628281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.628639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.628669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.628924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.628957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.629249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.629281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.729 [2024-11-20 17:13:08.629652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.729 [2024-11-20 17:13:08.629683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.729 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.630060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.630092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.630500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.630538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.630886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.630915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.631325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.631356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.631772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.631800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.632231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.632261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.632558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.632587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.632948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.632976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.633334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.633369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.633735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.633764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.634125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.634155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.634578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.634611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.634954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.634984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.635235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.635266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.635533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.635566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.635937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.635966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.636215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.636248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.636529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.636558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.636822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.636850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.637205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.637234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.637588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.637618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.637996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.638025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.638235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.638264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.638694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.638722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.639093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.639121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.639534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.639563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.639992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.640034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.640297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.640327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.640701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.640729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.641103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.641131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.641502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.641531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.641772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.641800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.642171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.642201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.642558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.642588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.642850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.642881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.643250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.643282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.643632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.643662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.730 [2024-11-20 17:13:08.644013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.730 [2024-11-20 17:13:08.644041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.730 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.644431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.644461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.644850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.644879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.645125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.645157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.645461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.645489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.645830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.645858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.646115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.646142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.646441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.646471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.646823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.646852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.647108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.647137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.647519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.647549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.647905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.647932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.648364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.648396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.648785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.648813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.649205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.649234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.649583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.649612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.649964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.649993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.650250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.650279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.650549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.650577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.650814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.650842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.651089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.651116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.651516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.651547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.651912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.651941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.652295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.652325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.652681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.652709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.653124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.653152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.653562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.653591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.653958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.653986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.654344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.654374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.654733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.654761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.655125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.655170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.655546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.655575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.655827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.655855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.656219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.656249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.656631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.656660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.657041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.657072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.657437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.657466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.657713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.657744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.731 [2024-11-20 17:13:08.658094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.731 [2024-11-20 17:13:08.658123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.731 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.658308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.658337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.658775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.658804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.659053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.659082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.659361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.659392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.659685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.659713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.660067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.660097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.660519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.660551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.660948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.660977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.661343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.661373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.661737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.661766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.662143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.662180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.662621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.662650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.663014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.663044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.663491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.663521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.663875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.663905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.664249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.664279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.664677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.664707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.665077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.665105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.665499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.665538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.665814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.665843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.666303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.666333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.666717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.666745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.667098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.667126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.667495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.667526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.667891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.667920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.668197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.668228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.668583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.732 [2024-11-20 17:13:08.668613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.732 qpair failed and we were unable to recover it.
00:30:16.732 [2024-11-20 17:13:08.668965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.732 [2024-11-20 17:13:08.668993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.732 qpair failed and we were unable to recover it. 00:30:16.732 [2024-11-20 17:13:08.669383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.732 [2024-11-20 17:13:08.669413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.732 qpair failed and we were unable to recover it. 00:30:16.732 [2024-11-20 17:13:08.669755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.732 [2024-11-20 17:13:08.669784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.732 qpair failed and we were unable to recover it. 00:30:16.732 [2024-11-20 17:13:08.670146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.732 [2024-11-20 17:13:08.670203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.732 qpair failed and we were unable to recover it. 00:30:16.732 [2024-11-20 17:13:08.670622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.732 [2024-11-20 17:13:08.670651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.732 qpair failed and we were unable to recover it. 00:30:16.732 [2024-11-20 17:13:08.671013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.732 [2024-11-20 17:13:08.671043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.671456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.671487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.671908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.671936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.672281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.672312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.672715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.672744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 
00:30:16.733 [2024-11-20 17:13:08.673003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.673030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.673244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.673274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.673668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.673697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.674057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.674086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.674330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.674360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.674726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.674757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.675050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.675078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.675489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.675519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.675898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.675926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.676340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.676371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 
00:30:16.733 [2024-11-20 17:13:08.676741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.676770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.677137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.677177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.677623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.677653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.678008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.678037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.678445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.678474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.678830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.678859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.679109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.679137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.679390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.679423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.679790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.679820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.680182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.680212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 
00:30:16.733 [2024-11-20 17:13:08.680594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.680622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.680870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.680899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.681242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.681278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.681650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.681680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.682046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.682075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.682215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.682245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.682548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.682577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.682918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.682948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.683344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.683374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.683700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.683730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 
00:30:16.733 [2024-11-20 17:13:08.683977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.684005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.684396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.684427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.684862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.684890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.685236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.733 [2024-11-20 17:13:08.685274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.733 qpair failed and we were unable to recover it. 00:30:16.733 [2024-11-20 17:13:08.685660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.685688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.686055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.686084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.686404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.686434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.686806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.686835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.687090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.687119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.687503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.687533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 
00:30:16.734 [2024-11-20 17:13:08.687789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.687817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.688176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.688206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.688641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.688670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.688927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.688955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.689325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.689355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.689726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.689754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.690126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.690154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.690522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.690551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.690919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.690948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.691300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.691336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 
00:30:16.734 [2024-11-20 17:13:08.691673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.691703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.692061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.692090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.692473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.692503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.692981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.693011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.693266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.693298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.693651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.693679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.694073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.694102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.694491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.694522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.694892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.694920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.695265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.695295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 
00:30:16.734 [2024-11-20 17:13:08.695665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.695695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.696062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.696091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.696378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.696409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.696782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.696812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.697058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.697090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.697460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.697496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.697766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.697796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.698149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.698190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.698578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.698607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.698974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.699002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 
00:30:16.734 [2024-11-20 17:13:08.699377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.699408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.699798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.699827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.734 [2024-11-20 17:13:08.700203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.734 [2024-11-20 17:13:08.700233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.734 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.700603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.700639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.701004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.701032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.701393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.701424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.701780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.701808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.702190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.702221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.702606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.702636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.702881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.702915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 
00:30:16.735 [2024-11-20 17:13:08.703293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.703323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.703703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.703731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.704078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.704108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.704380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.704410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.704785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.704815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.705193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.705224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.705586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.705616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.705984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.706013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.706378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.706409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.706737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.706767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 
00:30:16.735 [2024-11-20 17:13:08.707116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.707151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.707511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.707540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.707910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.707947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.708300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.708329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.708676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.708707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.709055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.709083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.709514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.709544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.709878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.709907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.710342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.710372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.710732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.710760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 
00:30:16.735 [2024-11-20 17:13:08.711112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.711141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.711536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.711566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.711911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.711940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.712309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.712340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.712736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.712764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.713026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.713058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.713415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.713446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.713813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.713842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.714191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.714221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.714623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.714652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 
00:30:16.735 [2024-11-20 17:13:08.715003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.735 [2024-11-20 17:13:08.715032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.735 qpair failed and we were unable to recover it. 00:30:16.735 [2024-11-20 17:13:08.715386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.715416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.715757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.715789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.716150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.716190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.716570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.716599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.716935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.716965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.717332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.717363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.717715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.717757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.718192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.718223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.718589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.718619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 
00:30:16.736 [2024-11-20 17:13:08.718976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.719003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.719343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.719373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.719739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.719767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.720125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.720154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.720503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.720535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.720886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.720913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.721278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.721309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.721739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.721767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.722122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.722149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.722527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.722556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 
00:30:16.736 [2024-11-20 17:13:08.722898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.722935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.723226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.723256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.723646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.723675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.724041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.724070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.724429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.724459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.724822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.724851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.725254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.725284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.725636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.725665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.726029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.726056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 00:30:16.736 [2024-11-20 17:13:08.726400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.736 [2024-11-20 17:13:08.726430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.736 qpair failed and we were unable to recover it. 
00:30:16.736 [2024-11-20 17:13:08.726682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.736 [2024-11-20 17:13:08.726711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:16.736 qpair failed and we were unable to recover it.
00:30:16.736 [... the identical three-message sequence above (connect() failed, errno = 111 / sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats back-to-back with only the timestamps advancing, from 17:13:08.726 through 17:13:08.799 ...]
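Editor's note: errno 111 on Linux is ECONNREFUSED, which connect() returns when a TCP SYN reaches 10.0.0.2 but nothing is listening on port 4420 (the NVMe/TCP well-known port) because the target application has been killed (see the "Killed" line below). The following standalone C sketch reproduces that condition under those assumptions; it is an illustration, not SPDK's posix_sock_create() implementation.

/* Minimal reproduction of the "connect() failed, errno = 111" messages
 * above: connect a TCP socket to a host/port with no listener.
 * Illustrative only; not SPDK code. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),          /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        /* With the nvmf target killed, this prints errno = 111 (ECONNREFUSED) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}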
00:30:16.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2154250 Killed "${NVMF_APP[@]}" "$@"
00:30:16.742 17:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:16.742 17:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:16.742 17:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:16.742 17:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:16.742 17:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:16.741 [2024-11-20 17:13:08.799287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.741 [2024-11-20 17:13:08.799321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.741 qpair failed and we were unable to recover it.
00:30:16.742 [2024-11-20 17:13:08.801571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.742 [2024-11-20 17:13:08.801600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.742 qpair failed and we were unable to recover it.
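The "Killed" line above is the fault this test case injects: target_disconnect.sh line 36 kills the running nvmf_tgt, so every reconnect attempt from the host side is refused (errno 111 is ECONNREFUSED on Linux) until disconnect_init brings the target back. A minimal standalone sketch of what the host is experiencing while the listener is gone; this is not SPDK harness code, and only the address 10.0.0.2 and port 4420 come from the log, the poll interval is an illustrative assumption:

#!/usr/bin/env bash
# Sketch only -- not part of the SPDK tree. While nvmf_tgt is down, every
# TCP connect() to the subsystem listener is refused (errno 111, i.e.
# ECONNREFUSED); this loop exits once something is listening again.
target_ip=10.0.0.2   # from the log
target_port=4420     # NVMe/TCP listener port, as in the log

until bash -c ">/dev/tcp/${target_ip}/${target_port}" 2>/dev/null; do
    # Each refused attempt here corresponds to one
    # "connect() failed, errno = 111" line in the log above.
    sleep 0.1
done
echo "listener on ${target_ip}:${target_port} accepts connections again"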
00:30:16.742 [2024-11-20 17:13:08.801970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.742 [2024-11-20 17:13:08.801998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.742 qpair failed and we were unable to recover it.
00:30:16.742 [2024-11-20 17:13:08.809151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.742 [2024-11-20 17:13:08.809195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.742 qpair failed and we were unable to recover it.
00:30:16.742 17:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2155120
00:30:16.742 17:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2155120
00:30:16.742 17:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2155120 ']'
00:30:16.742 17:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:16.742 17:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:16.742 17:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:16.742 17:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:16.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:16.742 [2024-11-20 17:13:08.809490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.742 [2024-11-20 17:13:08.809520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.742 qpair failed and we were unable to recover it.
00:30:16.742 [2024-11-20 17:13:08.811007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.742 [2024-11-20 17:13:08.811036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.742 qpair failed and we were unable to recover it.
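The trace above relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the new process (nvmfpid=2155120) is up on its RPC socket. A rough sketch of that wait loop follows; the function body is an approximation, not the real autotest_common.sh implementation, and only the pid 2155120, the socket path /var/tmp/spdk.sock, and max_retries=100 come from the log:

# Sketch of the waitforlisten step traced above: poll for the app's
# UNIX-domain RPC socket with a bounded retry count.
waitforlisten_sketch() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100

    while (( max_retries-- > 0 )); do
        # Give up early if the target died instead of starting up.
        kill -0 "$pid" 2>/dev/null || return 1
        # Ready once the RPC socket file exists.
        [[ -S $rpc_addr ]] && return 0
        sleep 0.1
    done
    return 1   # never started listening within the retry budget
}

# Example invocation matching the log:
# waitforlisten_sketch 2155120 /var/tmp/spdk.sock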
00:30:16.742 17:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:16.742 17:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:16.742 [2024-11-20 17:13:08.811387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.742 [2024-11-20 17:13:08.811417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.742 qpair failed and we were unable to recover it.
00:30:16.743 [2024-11-20 17:13:08.814506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.743 [2024-11-20 17:13:08.814538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.743 qpair failed and we were unable to recover it.
00:30:16.743 [2024-11-20 17:13:08.814912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.743 [2024-11-20 17:13:08.814941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.743 qpair failed and we were unable to recover it.
00:30:16.746 [2024-11-20 17:13:08.867206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.746 [2024-11-20 17:13:08.867239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.746 qpair failed and we were unable to recover it.
00:30:16.746 [2024-11-20 17:13:08.867608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.746 [2024-11-20 17:13:08.867636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.746 qpair failed and we were unable to recover it. 00:30:16.746 [2024-11-20 17:13:08.868021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.746 [2024-11-20 17:13:08.868055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.746 qpair failed and we were unable to recover it. 00:30:16.746 [2024-11-20 17:13:08.868412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.746 [2024-11-20 17:13:08.868443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 [2024-11-20 17:13:08.868438] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.868500] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.747 [2024-11-20 17:13:08.868818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.868850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.869104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.869132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.869392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.869423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.869816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.869847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.870237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.870271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.870531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.870566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 
00:30:16.747 [2024-11-20 17:13:08.870932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.870962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.871341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.871374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.871755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.871786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.872150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.872231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.872604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.872637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.872904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.872940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.873195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.873227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.873583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.873616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.873990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.874020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.874418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.874453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 
00:30:16.747 [2024-11-20 17:13:08.874834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.874865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.875106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.875138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.875526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.875560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.875823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.875853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.876242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.876275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.876672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.876701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.877135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.877178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.877431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.877460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.877842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.877871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.878247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.878277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 
00:30:16.747 [2024-11-20 17:13:08.878646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.878676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.878809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.878838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.879190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.879220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.879609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.879640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.880004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.880034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.880283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.880314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.880692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.880721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.881100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.881130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.881509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.881540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.881928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.881958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 
00:30:16.747 [2024-11-20 17:13:08.882222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.747 [2024-11-20 17:13:08.882253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.747 qpair failed and we were unable to recover it. 00:30:16.747 [2024-11-20 17:13:08.882625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.748 [2024-11-20 17:13:08.882656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-11-20 17:13:08.882915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.748 [2024-11-20 17:13:08.882943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-11-20 17:13:08.883326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.748 [2024-11-20 17:13:08.883356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-11-20 17:13:08.883597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.748 [2024-11-20 17:13:08.883631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-11-20 17:13:08.884016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.748 [2024-11-20 17:13:08.884047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-11-20 17:13:08.884318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.748 [2024-11-20 17:13:08.884349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-11-20 17:13:08.884773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.748 [2024-11-20 17:13:08.884803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-11-20 17:13:08.885183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.748 [2024-11-20 17:13:08.885214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-11-20 17:13:08.885458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.748 [2024-11-20 17:13:08.885488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.748 qpair failed and we were unable to recover it. 
00:30:16.748 [2024-11-20 17:13:08.885848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.748 [2024-11-20 17:13:08.885877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-11-20 17:13:08.886257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.748 [2024-11-20 17:13:08.886289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-11-20 17:13:08.886673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.748 [2024-11-20 17:13:08.886703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-11-20 17:13:08.887079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.748 [2024-11-20 17:13:08.887108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-11-20 17:13:08.887522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.748 [2024-11-20 17:13:08.887553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-11-20 17:13:08.887925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.748 [2024-11-20 17:13:08.887954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-11-20 17:13:08.888224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.748 [2024-11-20 17:13:08.888255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:16.748 qpair failed and we were unable to recover it. 00:30:16.748 [2024-11-20 17:13:08.888618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.028 [2024-11-20 17:13:08.888646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.028 qpair failed and we were unable to recover it. 00:30:17.028 [2024-11-20 17:13:08.888922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.028 [2024-11-20 17:13:08.888952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.028 qpair failed and we were unable to recover it. 00:30:17.028 [2024-11-20 17:13:08.889216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.028 [2024-11-20 17:13:08.889247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.028 qpair failed and we were unable to recover it. 
00:30:17.028 [2024-11-20 17:13:08.889626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.028 [2024-11-20 17:13:08.889656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.028 qpair failed and we were unable to recover it. 00:30:17.028 [2024-11-20 17:13:08.889889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.028 [2024-11-20 17:13:08.889923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.028 qpair failed and we were unable to recover it. 00:30:17.028 [2024-11-20 17:13:08.890288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.890319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.890577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.890606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.890850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.890878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.891243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.891275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.891689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.891719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.891949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.891977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.892216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.892246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.892509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.892538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 
00:30:17.029 [2024-11-20 17:13:08.892902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.892930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.893314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.893345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.893713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.893742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.894109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.894138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.894512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.894542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.894675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.894702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.894947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.894977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.895195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.895224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.895632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.895662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.896026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.896056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 
00:30:17.029 [2024-11-20 17:13:08.896425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.896456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.896835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.896865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.897124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.897153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.897532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.897562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.897823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.897855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.898148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.898191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.898573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.898603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.898990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.899019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.899390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.899420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.899788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.899816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 
00:30:17.029 [2024-11-20 17:13:08.900199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.900229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.900600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.900628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.901011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.901041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.901480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.901510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.901823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.901852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.902227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.902257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.902630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.902659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.903025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.903054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.903447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.029 [2024-11-20 17:13:08.903477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.029 qpair failed and we were unable to recover it. 00:30:17.029 [2024-11-20 17:13:08.903855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.903883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 
00:30:17.030 [2024-11-20 17:13:08.904142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.904196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.904571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.904600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.904954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.904990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.905437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.905469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.905824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.905854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.906138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.906178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.906432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.906461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.906823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.906852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.907231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.907261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.907642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.907671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 
00:30:17.030 [2024-11-20 17:13:08.908033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.908061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.908441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.908470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.908855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.908884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.909267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.909297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.909623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.909653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.909892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.909922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.910372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.910404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.910781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.910812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.911199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.911229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.911539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.911567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 
00:30:17.030 [2024-11-20 17:13:08.911915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.911943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.912318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.912348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.912729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.912760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.913122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.913151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.913398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.913432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.913778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.913807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.914187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.914217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.914614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.914645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.915009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.915040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 00:30:17.030 [2024-11-20 17:13:08.915406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.030 [2024-11-20 17:13:08.915443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.030 qpair failed and we were unable to recover it. 
00:30:17.030 [2024-11-20 17:13:08.915839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.030 [2024-11-20 17:13:08.915868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.030 qpair failed and we were unable to recover it.
00:30:17.034 [2024-11-20 17:13:08.972909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:17.036 [2024-11-20 17:13:08.994764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.036 [2024-11-20 17:13:08.994793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.036 qpair failed and we were unable to recover it.
00:30:17.036 [2024-11-20 17:13:08.995200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:08.995232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.036 [2024-11-20 17:13:08.995615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:08.995645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.036 [2024-11-20 17:13:08.995925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:08.995954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.036 [2024-11-20 17:13:08.996347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:08.996377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.036 [2024-11-20 17:13:08.996737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:08.996766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.036 [2024-11-20 17:13:08.997142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:08.997183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.036 [2024-11-20 17:13:08.997527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:08.997557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.036 [2024-11-20 17:13:08.997942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:08.997971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.036 [2024-11-20 17:13:08.998314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:08.998344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.036 [2024-11-20 17:13:08.998691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:08.998720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 
00:30:17.036 [2024-11-20 17:13:08.999070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:08.999098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.036 [2024-11-20 17:13:08.999477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:08.999507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.036 [2024-11-20 17:13:08.999858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:08.999887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.036 [2024-11-20 17:13:09.000136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:09.000181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.036 [2024-11-20 17:13:09.000546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:09.000575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.036 [2024-11-20 17:13:09.000931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:09.000959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.036 [2024-11-20 17:13:09.001200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:09.001230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.036 [2024-11-20 17:13:09.001595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:09.001623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.036 [2024-11-20 17:13:09.001996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:09.002025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.036 [2024-11-20 17:13:09.002399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:09.002429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 
00:30:17.036 [2024-11-20 17:13:09.002790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:09.002826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.036 [2024-11-20 17:13:09.003087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.036 [2024-11-20 17:13:09.003121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.036 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.003412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.003444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.003827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.003857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.004218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.004248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.004618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.004647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.005022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.005052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.005451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.005482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.005787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.005816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.006135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.006174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 
00:30:17.037 [2024-11-20 17:13:09.006529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.006560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.006904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.006932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.007288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.007318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.007676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.007705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.008064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.008094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.008279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.008309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.008664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.008695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.009055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.009084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.009309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.009342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.009694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.009723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 
00:30:17.037 [2024-11-20 17:13:09.010074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.010103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.010470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.010501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.010853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.010882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.011325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.011355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.011704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.011735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.012103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.012133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.012407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.012437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.012684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.012712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.013070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.013100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.013476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.013507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 
00:30:17.037 [2024-11-20 17:13:09.013870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.013901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.014243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.014274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.014633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.014662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.015015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.015045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.015397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.015427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.015793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.015821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.016184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.016215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.016607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.016636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.017004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.017034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 00:30:17.037 [2024-11-20 17:13:09.017405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.037 [2024-11-20 17:13:09.017434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.037 qpair failed and we were unable to recover it. 
00:30:17.037 [2024-11-20 17:13:09.017802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.017832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 00:30:17.038 [2024-11-20 17:13:09.018228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.018267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 00:30:17.038 [2024-11-20 17:13:09.018644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.018673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 00:30:17.038 [2024-11-20 17:13:09.019029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.019058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 00:30:17.038 [2024-11-20 17:13:09.019193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.019228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 00:30:17.038 [2024-11-20 17:13:09.019583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.019611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 00:30:17.038 [2024-11-20 17:13:09.019968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.019997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 00:30:17.038 [2024-11-20 17:13:09.020388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.020419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 00:30:17.038 [2024-11-20 17:13:09.020788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.020818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 00:30:17.038 [2024-11-20 17:13:09.021182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.021213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 
00:30:17.038 [2024-11-20 17:13:09.021575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.021604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 00:30:17.038 [2024-11-20 17:13:09.021826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.021853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 00:30:17.038 [2024-11-20 17:13:09.022217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.022248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 00:30:17.038 [2024-11-20 17:13:09.022608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.022638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 00:30:17.038 [2024-11-20 17:13:09.023004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.023034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 00:30:17.038 [2024-11-20 17:13:09.023301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.023331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 00:30:17.038 [2024-11-20 17:13:09.023678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.023707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 00:30:17.038 [2024-11-20 17:13:09.024013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.024042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 00:30:17.038 [2024-11-20 17:13:09.024372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.024404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 00:30:17.038 [2024-11-20 17:13:09.024656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.038 [2024-11-20 17:13:09.024688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.038 qpair failed and we were unable to recover it. 
00:30:17.038 [2024-11-20 17:13:09.026365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.038 [2024-11-20 17:13:09.026380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:17.038 [2024-11-20 17:13:09.026395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 [2024-11-20 17:13:09.026424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:17.038 qpair failed and we were unable to recover it.
00:30:17.038 [2024-11-20 17:13:09.026434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:17.038 [2024-11-20 17:13:09.026442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:17.038 [2024-11-20 17:13:09.026448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:17.038 [2024-11-20 17:13:09.028513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:30:17.038 [2024-11-20 17:13:09.028675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:30:17.038 [2024-11-20 17:13:09.028804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:30:17.038 [2024-11-20 17:13:09.028805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:30:17.039 [2024-11-20 17:13:09.031222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.031252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.031530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.031558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.031922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.031951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.032314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.032346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.032720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.032750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.033098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.033127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.033558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.033589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.033942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.033972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.034225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.034258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.034627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.034656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 
00:30:17.039 [2024-11-20 17:13:09.034911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.034940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.035305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.035334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.035694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.035723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.036093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.036123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.036525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.036556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.036911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.036946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.037321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.037350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.037731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.037759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.038119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.038149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.038513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.038543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 
00:30:17.039 [2024-11-20 17:13:09.038886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.038915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.039274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.039306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.039675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.039704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.039963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.039992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.040345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.040375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.040733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.040762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.041130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.041175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.041532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.041560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.041912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.041943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.042311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.042343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 
00:30:17.039 [2024-11-20 17:13:09.042688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.042719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.043075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.043105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.039 [2024-11-20 17:13:09.043473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.039 [2024-11-20 17:13:09.043504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.039 qpair failed and we were unable to recover it. 00:30:17.040 [2024-11-20 17:13:09.043866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.040 [2024-11-20 17:13:09.043896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.040 qpair failed and we were unable to recover it. 00:30:17.040 [2024-11-20 17:13:09.044276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.040 [2024-11-20 17:13:09.044307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.040 qpair failed and we were unable to recover it. 00:30:17.040 [2024-11-20 17:13:09.044655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.040 [2024-11-20 17:13:09.044686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.040 qpair failed and we were unable to recover it. 00:30:17.040 [2024-11-20 17:13:09.045072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.040 [2024-11-20 17:13:09.045100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.040 qpair failed and we were unable to recover it. 00:30:17.040 [2024-11-20 17:13:09.045347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.040 [2024-11-20 17:13:09.045377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.040 qpair failed and we were unable to recover it. 00:30:17.040 [2024-11-20 17:13:09.045733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.040 [2024-11-20 17:13:09.045762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.040 qpair failed and we were unable to recover it. 00:30:17.040 [2024-11-20 17:13:09.046117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.040 [2024-11-20 17:13:09.046147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.040 qpair failed and we were unable to recover it. 
00:30:17.040 [2024-11-20 17:13:09.046575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.040 [2024-11-20 17:13:09.046605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.040 qpair failed and we were unable to recover it. 00:30:17.040 [2024-11-20 17:13:09.046977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.040 [2024-11-20 17:13:09.047015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.040 qpair failed and we were unable to recover it. 00:30:17.040 [2024-11-20 17:13:09.047429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.040 [2024-11-20 17:13:09.047459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.040 qpair failed and we were unable to recover it. 00:30:17.040 [2024-11-20 17:13:09.047807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.040 [2024-11-20 17:13:09.047836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.040 qpair failed and we were unable to recover it. 00:30:17.040 [2024-11-20 17:13:09.048214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.040 [2024-11-20 17:13:09.048244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.040 qpair failed and we were unable to recover it. 00:30:17.040 [2024-11-20 17:13:09.048562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.040 [2024-11-20 17:13:09.048592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.040 qpair failed and we were unable to recover it. 00:30:17.040 [2024-11-20 17:13:09.048955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.040 [2024-11-20 17:13:09.048984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.040 qpair failed and we were unable to recover it. 00:30:17.040 [2024-11-20 17:13:09.049323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.040 [2024-11-20 17:13:09.049355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.040 qpair failed and we were unable to recover it. 00:30:17.040 [2024-11-20 17:13:09.049555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.040 [2024-11-20 17:13:09.049585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.040 qpair failed and we were unable to recover it. 00:30:17.040 [2024-11-20 17:13:09.049912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.040 [2024-11-20 17:13:09.049940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.040 qpair failed and we were unable to recover it. 
00:30:17.040 [2024-11-20 17:13:09.050314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.050344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.050690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.050721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.050960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.050988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.051237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.051268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.051566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.051595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.051955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.051984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.052213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.052251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.052549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.052577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.052941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.052971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.053328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.053360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.053717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.053746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.053993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.054021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.054431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.054460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.054833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.054863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.055125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.055153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.055454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.055485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.055724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.055752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.056003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.056033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.056409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.056440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.056656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.056684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.040 [2024-11-20 17:13:09.057052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.040 [2024-11-20 17:13:09.057081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.040 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.057476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.057505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.057765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.057793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.058138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.058179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.058531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.058562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.058926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.058955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.059254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.059283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.059619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.059648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.060013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.060041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.060424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.060456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.060818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.060849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.061120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.061148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.061549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.061579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.061791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.061824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.062198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.062230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.062473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.062502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.062725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.062756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.063102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.063132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.063525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.063557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.063770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.063798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.064115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.064145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.064425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.064458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.064688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.064719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.064938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.064968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.065315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.065347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.065669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.065700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.066051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.066082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.066426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.066460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.066686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.066715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.067062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.067092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.067311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.067342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.067574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.067605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.067960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.067992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.068352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.068383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.068729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.068760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.068872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.068903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.069251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.069283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.069514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.069543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.069914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.041 [2024-11-20 17:13:09.069944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.041 qpair failed and we were unable to recover it.
00:30:17.041 [2024-11-20 17:13:09.070310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.070342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.070715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.070745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.071074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.071103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.071528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.071560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.071805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.071835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.072226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.072258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.072579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.072608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.072873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.072902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.073235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.073268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.073607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.073638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.073857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.073887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.074253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.074284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.074635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.074671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.074899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.074928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.075261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.075290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.075529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.075564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.075879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.075910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.076176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.076207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.076509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.076539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.076887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.076916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.077276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.077307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.077701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.077731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.077994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.078026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.078396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.078426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.078773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.078805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.079198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.079229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.079580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.079612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 [2024-11-20 17:13:09.079718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.042 [2024-11-20 17:13:09.079748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.042 qpair failed and we were unable to recover it.
00:30:17.042 Read completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Read completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Read completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Read completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Read completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Read completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Read completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Read completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Read completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Read completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Read completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Read completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Write completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Read completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Read completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Write completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Read completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Read completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Read completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Write completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.042 Write completed with error (sct=0, sc=8)
00:30:17.042 starting I/O failed
00:30:17.043 Read completed with error (sct=0, sc=8)
00:30:17.043 starting I/O failed
00:30:17.043 Write completed with error (sct=0, sc=8)
00:30:17.043 starting I/O failed
00:30:17.043 Read completed with error (sct=0, sc=8)
00:30:17.043 starting I/O failed
00:30:17.043 Write completed with error (sct=0, sc=8)
00:30:17.043 starting I/O failed
00:30:17.043 Write completed with error (sct=0, sc=8)
00:30:17.043 starting I/O failed
00:30:17.043 Read completed with error (sct=0, sc=8)
00:30:17.043 starting I/O failed
00:30:17.043 Read completed with error (sct=0, sc=8)
00:30:17.043 starting I/O failed
00:30:17.043 Read completed with error (sct=0, sc=8)
00:30:17.043 starting I/O failed
00:30:17.043 Write completed with error (sct=0, sc=8)
00:30:17.043 starting I/O failed
00:30:17.043 Read completed with error (sct=0, sc=8)
00:30:17.043 starting I/O failed
00:30:17.043 Read completed with error (sct=0, sc=8)
00:30:17.043 starting I/O failed
00:30:17.043 [2024-11-20 17:13:09.080611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:17.043 [2024-11-20 17:13:09.081128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.081204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.081680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.081783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.082183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.082217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.082564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.082596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.082967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.082997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.083359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.083391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.083728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.083756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.084119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.084156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.084505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.084544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.084900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.084929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.085207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.085238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.085603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.085631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.085872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.085902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.086297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.086327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.086667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.086697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.087060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.087089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.087458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.087491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.087829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.087858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.087991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.088020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.088263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.088292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.088658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.088687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.089035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.089065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.089449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.089480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.089708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.089738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.089985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.090014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.090387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.090418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.090749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.090780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.091156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.091201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.091644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.091675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.091791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.091823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.092151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.092196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.092542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.092570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.092831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.092861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.043 [2024-11-20 17:13:09.093214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.043 [2024-11-20 17:13:09.093247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.043 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.093612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.093642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.093920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.093952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.094293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.094324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.094669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.094699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.094939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.094969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.095349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.095378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.095722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.095757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.096002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.096033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.096262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.096296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.096658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.096688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.096928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.096958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.097322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.097354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.097749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.097778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.098045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.098077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.098421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.098460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.098772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.098801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.099186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.099216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.099487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.099517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.099834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.099863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.100225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.100256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.100617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.100645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.101014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.101046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.101394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.101425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.101777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.101808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.102183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.102213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.102465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.102495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.102906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.102935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.103194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.103225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.103505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.103535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.103784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.103814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.104221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.104251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.104573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.104602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.104987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.105016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.105377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.105410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.105771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.105800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.106004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.106032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.106404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.106434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.106802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.106831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.044 qpair failed and we were unable to recover it.
00:30:17.044 [2024-11-20 17:13:09.107077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.044 [2024-11-20 17:13:09.107107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.045 qpair failed and we were unable to recover it.
00:30:17.045 [2024-11-20 17:13:09.107503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.045 [2024-11-20 17:13:09.107535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.045 qpair failed and we were unable to recover it.
00:30:17.045 [2024-11-20 17:13:09.107875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.045 [2024-11-20 17:13:09.107907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.045 qpair failed and we were unable to recover it.
00:30:17.045 [2024-11-20 17:13:09.108279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.045 [2024-11-20 17:13:09.108318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.045 qpair failed and we were unable to recover it.
00:30:17.045 [2024-11-20 17:13:09.108692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.045 [2024-11-20 17:13:09.108723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.045 qpair failed and we were unable to recover it.
00:30:17.045 [2024-11-20 17:13:09.109048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.045 [2024-11-20 17:13:09.109077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.045 qpair failed and we were unable to recover it.
00:30:17.045 [2024-11-20 17:13:09.109443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.045 [2024-11-20 17:13:09.109474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.045 qpair failed and we were unable to recover it.
00:30:17.045 [2024-11-20 17:13:09.109828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.045 [2024-11-20 17:13:09.109860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.045 qpair failed and we were unable to recover it.
00:30:17.045 [2024-11-20 17:13:09.110201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.045 [2024-11-20 17:13:09.110231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.045 qpair failed and we were unable to recover it.
00:30:17.045 [2024-11-20 17:13:09.110601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.110630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.111024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.111054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.111396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.111427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.111567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.111597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.111820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.111850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.112214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.112246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.112584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.112614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.112966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.112996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.113222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.113253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.113601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.113630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 
00:30:17.045 [2024-11-20 17:13:09.113987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.114016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.114278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.114307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.114650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.114679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.114920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.114954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.115302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.115334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.115570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.115600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.115973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.116003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.116370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.116402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.116775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.116804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.117145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.117203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 
00:30:17.045 [2024-11-20 17:13:09.117566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.117596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.117845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.117873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.118231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.118262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.118604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.118634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.118852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.118880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.119283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.119314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.119530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.119561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.119886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.119917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.120202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.120232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 00:30:17.045 [2024-11-20 17:13:09.120667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.045 [2024-11-20 17:13:09.120696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.045 qpair failed and we were unable to recover it. 
00:30:17.045 [2024-11-20 17:13:09.121049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.121079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.121366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.121402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.121585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.121616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.121976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.122006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.122383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.122415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.122749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.122785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.123094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.123124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.123498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.123530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.123906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.123935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.124312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.124347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 
00:30:17.046 [2024-11-20 17:13:09.124662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.124691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.125033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.125063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.125295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.125325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.125713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.125742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.126099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.126129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.126508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.126539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.126788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.126817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.127148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.127197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.127520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.127549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.127867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.127897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 
00:30:17.046 [2024-11-20 17:13:09.128130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.128181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.128588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.128618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.128827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.128856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.129139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.129184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.129568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.129602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.129953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.129985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.130218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.130248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.130611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.130639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.130859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.130888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.131287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.131320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 
00:30:17.046 [2024-11-20 17:13:09.131693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.131723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.131999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.132030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.132255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.132287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.132653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.132683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.133062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.133094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.133457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.133488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.133698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.133728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.134111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.134143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.046 [2024-11-20 17:13:09.134482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.046 [2024-11-20 17:13:09.134522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.046 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.134732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.134761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 
00:30:17.047 [2024-11-20 17:13:09.135008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.135039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.135383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.135415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.135783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.135812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.136185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.136228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.136595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.136624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.136970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.136999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.137385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.137416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.137628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.137656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.137874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.137905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.138048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.138082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 
00:30:17.047 [2024-11-20 17:13:09.138306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.138338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.138738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.138769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.139124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.139155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.139544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.139577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.139921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.139952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.140339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.140370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.140729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.140758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.141130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.141172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.141498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.141526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.141890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.141919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 
00:30:17.047 [2024-11-20 17:13:09.142138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.142181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.142569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.142598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.142963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.142993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.143221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.143250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.143625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.143654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.144008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.144038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.144304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.144335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.144725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.144755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.144992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.145022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.145402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.145434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 
00:30:17.047 [2024-11-20 17:13:09.145750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.145779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.145998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.146027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.047 qpair failed and we were unable to recover it. 00:30:17.047 [2024-11-20 17:13:09.146259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.047 [2024-11-20 17:13:09.146288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.146684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.146720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.146931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.146959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.147330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.147361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.147614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.147643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.147888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.147917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.148281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.148312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.148702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.148731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 
00:30:17.048 [2024-11-20 17:13:09.149095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.149125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.149513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.149545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.149946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.149975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.150336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.150367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.150731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.150760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.151138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.151184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.151520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.151550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.151914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.151943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.152311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.152343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.152573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.152602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 
00:30:17.048 [2024-11-20 17:13:09.152768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.152798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.153215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.153247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.153487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.153515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.153888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.153916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.154274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.154304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.154665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.154694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.155039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.155068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.155293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.155323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.155746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.155775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.156099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.156128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 
00:30:17.048 [2024-11-20 17:13:09.156391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.156422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.156680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.156709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.156944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.156979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.157225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.157256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.157605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.157634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.157972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.158002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.158402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.158433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.158780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.158811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.159196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.159228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.159593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.159622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 
00:30:17.048 [2024-11-20 17:13:09.159996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.048 [2024-11-20 17:13:09.160025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.048 qpair failed and we were unable to recover it. 00:30:17.048 [2024-11-20 17:13:09.160306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.049 [2024-11-20 17:13:09.160336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.049 qpair failed and we were unable to recover it. 00:30:17.049 [2024-11-20 17:13:09.160741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.049 [2024-11-20 17:13:09.160769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.049 qpair failed and we were unable to recover it. 00:30:17.049 [2024-11-20 17:13:09.161197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.049 [2024-11-20 17:13:09.161228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.049 qpair failed and we were unable to recover it. 00:30:17.049 [2024-11-20 17:13:09.161588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.049 [2024-11-20 17:13:09.161626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.049 qpair failed and we were unable to recover it. 00:30:17.049 [2024-11-20 17:13:09.162001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.049 [2024-11-20 17:13:09.162031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.049 qpair failed and we were unable to recover it. 00:30:17.049 [2024-11-20 17:13:09.162396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.049 [2024-11-20 17:13:09.162428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.049 qpair failed and we were unable to recover it. 00:30:17.049 [2024-11-20 17:13:09.162680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.049 [2024-11-20 17:13:09.162709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.049 qpair failed and we were unable to recover it. 00:30:17.049 [2024-11-20 17:13:09.163005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.049 [2024-11-20 17:13:09.163033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.049 qpair failed and we were unable to recover it. 00:30:17.049 [2024-11-20 17:13:09.163397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.049 [2024-11-20 17:13:09.163428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.049 qpair failed and we were unable to recover it. 
00:30:17.049 [2024-11-20 17:13:09.163781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.049 [2024-11-20 17:13:09.163811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.049 qpair failed and we were unable to recover it.
00:30:17.049 [... roughly 200 further identical triplets elided: connect() failed, errno = 111 / sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.", timestamps running 2024-11-20 17:13:09.164182 through 17:13:09.240253 ...]
00:30:17.358 [2024-11-20 17:13:09.240584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.358 [2024-11-20 17:13:09.240612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420
00:30:17.358 qpair failed and we were unable to recover it.
00:30:17.358 [2024-11-20 17:13:09.240990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.358 [2024-11-20 17:13:09.241020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.358 qpair failed and we were unable to recover it. 00:30:17.358 [2024-11-20 17:13:09.241381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.358 [2024-11-20 17:13:09.241412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.358 qpair failed and we were unable to recover it. 00:30:17.358 [2024-11-20 17:13:09.241768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.358 [2024-11-20 17:13:09.241797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.358 qpair failed and we were unable to recover it. 00:30:17.358 [2024-11-20 17:13:09.242155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.358 [2024-11-20 17:13:09.242196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.358 qpair failed and we were unable to recover it. 00:30:17.358 [2024-11-20 17:13:09.242601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.358 [2024-11-20 17:13:09.242630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.358 qpair failed and we were unable to recover it. 00:30:17.358 [2024-11-20 17:13:09.242962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.358 [2024-11-20 17:13:09.242989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.358 qpair failed and we were unable to recover it. 00:30:17.358 [2024-11-20 17:13:09.243363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.358 [2024-11-20 17:13:09.243393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.358 qpair failed and we were unable to recover it. 00:30:17.358 [2024-11-20 17:13:09.243749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.358 [2024-11-20 17:13:09.243778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.358 qpair failed and we were unable to recover it. 00:30:17.358 [2024-11-20 17:13:09.244131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.358 [2024-11-20 17:13:09.244170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.358 qpair failed and we were unable to recover it. 00:30:17.358 [2024-11-20 17:13:09.244519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.358 [2024-11-20 17:13:09.244548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.358 qpair failed and we were unable to recover it. 
00:30:17.358 [2024-11-20 17:13:09.244865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.358 [2024-11-20 17:13:09.244894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.358 qpair failed and we were unable to recover it. 00:30:17.358 [2024-11-20 17:13:09.245269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.358 [2024-11-20 17:13:09.245300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.358 qpair failed and we were unable to recover it. 00:30:17.358 [2024-11-20 17:13:09.245645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.358 [2024-11-20 17:13:09.245675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.358 qpair failed and we were unable to recover it. 00:30:17.358 [2024-11-20 17:13:09.246030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.358 [2024-11-20 17:13:09.246058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.246396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.246426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.246808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.246837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.247188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.247219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.247439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.247468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.247836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.247864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.248240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.248269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 
00:30:17.359 [2024-11-20 17:13:09.248610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.248640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.248996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.249024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.249375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.249405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.249779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.249807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.249928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.249961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.250355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.250384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.250696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.250725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.251106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.251135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.251485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.251515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.251897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.251927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 
00:30:17.359 [2024-11-20 17:13:09.252141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.252182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.252505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.252534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.252747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.252775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.253155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.253210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.253565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.253594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.253951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.253980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.254371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.254402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.254750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.254778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.255018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.255046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.255418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.255449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 
00:30:17.359 [2024-11-20 17:13:09.255695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.255723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.256080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.256108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.256358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.256388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.359 [2024-11-20 17:13:09.256746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.359 [2024-11-20 17:13:09.256776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.359 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.256998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.257027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.257235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.257265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.257624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.257654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.258014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.258043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.258378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.258409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.258769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.258798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 
00:30:17.360 [2024-11-20 17:13:09.259155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.259196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.259551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.259579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.259939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.259967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.260337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.260367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.260549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.260577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.260883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.260912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.261273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.261310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.261540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.261569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.261938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.261968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.262322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.262353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 
00:30:17.360 [2024-11-20 17:13:09.262715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.262743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.263111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.263139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.263481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.263511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.263859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.263887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.264251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.264281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.264643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.264672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.265033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.265064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.265450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.265480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.265820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.265850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.266086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.266116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 
00:30:17.360 [2024-11-20 17:13:09.266537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.266569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.266805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.266834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.267208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.267239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.267493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.267521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.267743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.267774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.268052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.268085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.268468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.360 [2024-11-20 17:13:09.268498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.360 qpair failed and we were unable to recover it. 00:30:17.360 [2024-11-20 17:13:09.268825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.268857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.269205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.269236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.269508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.269537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 
00:30:17.361 [2024-11-20 17:13:09.269632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.269660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.270005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.270035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.270388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.270419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.270791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.270820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.271059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.271089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.271468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.271499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.271835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.271864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.271985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.272018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.272350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.272381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.272672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.272702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 
00:30:17.361 [2024-11-20 17:13:09.273066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.273095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.273444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.273476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.273684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.273713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.274076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.274105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.274445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.274476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.274820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.274849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.275210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.275240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.275614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.275643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.276000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.276030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 00:30:17.361 [2024-11-20 17:13:09.276417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.361 [2024-11-20 17:13:09.276446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.361 qpair failed and we were unable to recover it. 
00:30:17.361 [2024-11-20 17:13:09.277869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.361 [2024-11-20 17:13:09.277969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420
00:30:17.361 qpair failed and we were unable to recover it.
00:30:17.362 [... the same failure sequence, now against tqpair=0x7f3690000b90, repeats about 40 times through 17:13:09.291 ...]
00:30:17.363 [2024-11-20 17:13:09.292130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.363 [2024-11-20 17:13:09.292256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420
00:30:17.363 qpair failed and we were unable to recover it.
00:30:17.363 [... the same failure sequence, now against tqpair=0x7f3694000b90, repeats about 25 more times through 17:13:09.301 ...]
00:30:17.363 [2024-11-20 17:13:09.302216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.363 [2024-11-20 17:13:09.302248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.363 qpair failed and we were unable to recover it. 00:30:17.363 [2024-11-20 17:13:09.302587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.363 [2024-11-20 17:13:09.302619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.363 qpair failed and we were unable to recover it. 00:30:17.363 [2024-11-20 17:13:09.302979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.363 [2024-11-20 17:13:09.303008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.363 qpair failed and we were unable to recover it. 00:30:17.363 [2024-11-20 17:13:09.303343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.363 [2024-11-20 17:13:09.303374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.363 qpair failed and we were unable to recover it. 00:30:17.363 [2024-11-20 17:13:09.303734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.363 [2024-11-20 17:13:09.303763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.363 qpair failed and we were unable to recover it. 00:30:17.363 [2024-11-20 17:13:09.304113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.304142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.304379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.304408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.304648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.304688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.305082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.305111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.305493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.305523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 
00:30:17.364 [2024-11-20 17:13:09.305883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.305912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.306281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.306313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.306682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.306711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.307061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.307091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.307472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.307504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.307752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.307781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.308145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.308185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.308553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.308583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.308949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.308977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.309333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.309364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 
00:30:17.364 [2024-11-20 17:13:09.309735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.309764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.310126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.310155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.310437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.310467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.310773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.310801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.311180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.311211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.311422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.311452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.311828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.311856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.312219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.312251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.312478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.312506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.312877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.312907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 
00:30:17.364 [2024-11-20 17:13:09.313281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.313313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.364 qpair failed and we were unable to recover it. 00:30:17.364 [2024-11-20 17:13:09.313555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.364 [2024-11-20 17:13:09.313586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.313688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.313718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.314063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.314094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.314324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.314358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.314629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.314665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.315042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.315072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.315293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.315325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.315464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.315493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.315998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.316108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 
00:30:17.365 [2024-11-20 17:13:09.316676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.316786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.317459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.317567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc290c0 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.317966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.317999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.318212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.318242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.318637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.318667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.318894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.318922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.319300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.319332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.319582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.319619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.319848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.319876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.320288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.320320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 
00:30:17.365 [2024-11-20 17:13:09.320687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.320715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.321082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.321111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.321451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.321482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.321858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.321887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.321982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.322011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.322258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.322288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.322513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.322542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.322922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.322950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.323328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.323358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.323734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.323763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 
00:30:17.365 [2024-11-20 17:13:09.324124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.365 [2024-11-20 17:13:09.324153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.365 qpair failed and we were unable to recover it. 00:30:17.365 [2024-11-20 17:13:09.324544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.324576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.324903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.324933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.325287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.325319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.325671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.325702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.326073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.326101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.326333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.326363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.326752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.326781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.327208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.327239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.327599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.327634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 
00:30:17.366 [2024-11-20 17:13:09.328005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.328035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.328434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.328465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.328830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.328861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.329155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.329196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.329417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.329446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.329806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.329835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.330196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.330228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.330636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.330666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.330991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.331022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.331285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.331316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 
00:30:17.366 [2024-11-20 17:13:09.331733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.331761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.332029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.332062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.332274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.332304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.332637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.332666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.332874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.332902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.333286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.333317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.333683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.333714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.334071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.334108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.334394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.334427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 00:30:17.366 [2024-11-20 17:13:09.334759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.334801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.366 qpair failed and we were unable to recover it. 
00:30:17.366 [2024-11-20 17:13:09.335132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.366 [2024-11-20 17:13:09.335172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.335385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.335414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.335794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.335824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.336181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.336212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.336542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.336571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.336948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.336977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.337337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.337367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.337729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.337759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.338116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.338146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.338495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.338524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 
00:30:17.367 [2024-11-20 17:13:09.338892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.338921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.339156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.339208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.339417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.339447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.339804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.339834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.339934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.339965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.340079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.340113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.340484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.340517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.340738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.340766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.341116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.341148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.341515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.341545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 
00:30:17.367 [2024-11-20 17:13:09.341921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.341950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.342314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.342346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.342728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.342757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.343002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.343033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.343273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.343308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.343685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.343715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.344080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.344110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.344459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.344489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.344854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.344883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 00:30:17.367 [2024-11-20 17:13:09.345240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.367 [2024-11-20 17:13:09.345271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.367 qpair failed and we were unable to recover it. 
00:30:17.367 [2024-11-20 17:13:09.345653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.368 [2024-11-20 17:13:09.345682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.368 qpair failed and we were unable to recover it. 00:30:17.368 [2024-11-20 17:13:09.346043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.368 [2024-11-20 17:13:09.346072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.368 qpair failed and we were unable to recover it. 00:30:17.368 [2024-11-20 17:13:09.346310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.368 [2024-11-20 17:13:09.346342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.368 qpair failed and we were unable to recover it. 00:30:17.368 [2024-11-20 17:13:09.346720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.368 [2024-11-20 17:13:09.346750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.368 qpair failed and we were unable to recover it. 00:30:17.368 [2024-11-20 17:13:09.347115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.368 [2024-11-20 17:13:09.347144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.368 qpair failed and we were unable to recover it. 00:30:17.368 [2024-11-20 17:13:09.347528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.368 [2024-11-20 17:13:09.347557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.368 qpair failed and we were unable to recover it. 00:30:17.368 [2024-11-20 17:13:09.347808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.368 [2024-11-20 17:13:09.347837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.368 qpair failed and we were unable to recover it. 00:30:17.368 [2024-11-20 17:13:09.348201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.368 [2024-11-20 17:13:09.348239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.368 qpair failed and we were unable to recover it. 00:30:17.368 [2024-11-20 17:13:09.348587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.368 [2024-11-20 17:13:09.348618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.368 qpair failed and we were unable to recover it. 00:30:17.368 [2024-11-20 17:13:09.348987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.368 [2024-11-20 17:13:09.349018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.368 qpair failed and we were unable to recover it. 
00:30:17.368 [2024-11-20 17:13:09.349382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.368 [2024-11-20 17:13:09.349414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420
00:30:17.368 qpair failed and we were unable to recover it.
[The same three-line failure repeats for roughly 210 consecutive connection attempts between 17:13:09.349 and 17:13:09.423 (log clock 00:30:17.368 through 00:30:17.375), all against tqpair=0x7f3694000b90, addr=10.0.0.2, port=4420, with connect() returning errno = 111 every time; only the timestamps differ. The final attempt is shown below.]
00:30:17.375 [2024-11-20 17:13:09.423399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.375 [2024-11-20 17:13:09.423431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420
00:30:17.375 qpair failed and we were unable to recover it.
00:30:17.375 [2024-11-20 17:13:09.423797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.423825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.375 [2024-11-20 17:13:09.424206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.424237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.375 [2024-11-20 17:13:09.424595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.424624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.375 [2024-11-20 17:13:09.424987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.425016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.375 [2024-11-20 17:13:09.425387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.425416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.375 [2024-11-20 17:13:09.425774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.425802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.375 [2024-11-20 17:13:09.426040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.426068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.375 [2024-11-20 17:13:09.426463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.426492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.375 [2024-11-20 17:13:09.426858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.426887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.375 [2024-11-20 17:13:09.427235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.427266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 
00:30:17.375 [2024-11-20 17:13:09.427613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.427642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.375 [2024-11-20 17:13:09.427998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.428028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.375 [2024-11-20 17:13:09.428388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.428417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.375 [2024-11-20 17:13:09.428631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.428660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.375 [2024-11-20 17:13:09.428913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.428948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.375 [2024-11-20 17:13:09.429286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.429315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.375 [2024-11-20 17:13:09.429689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.429717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.375 [2024-11-20 17:13:09.430064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.430094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.375 [2024-11-20 17:13:09.430196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.430225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.375 [2024-11-20 17:13:09.430472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.430502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 
00:30:17.375 [2024-11-20 17:13:09.430823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.430853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.375 [2024-11-20 17:13:09.431066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.375 [2024-11-20 17:13:09.431094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.375 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.431428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.431459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.431826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.431854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.432185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.432215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.432567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.432596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.432975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.433005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.433380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.433410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.433785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.433814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.434177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.434209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 
00:30:17.376 [2024-11-20 17:13:09.434548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.434576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.434940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.434969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.435231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.435265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.435614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.435644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.435961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.435990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.436310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.436340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.436656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.436685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.436913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.436941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.437151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.437187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.437549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.437578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 
00:30:17.376 [2024-11-20 17:13:09.437711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.437739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.438133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.438169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.438513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.438542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.438920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.438949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.439220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.439251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.439488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.439521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.439885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.439914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.440171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.440201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.440572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.440601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.440761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.440792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 
00:30:17.376 [2024-11-20 17:13:09.441165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.441196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.441400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.376 [2024-11-20 17:13:09.441429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.376 qpair failed and we were unable to recover it. 00:30:17.376 [2024-11-20 17:13:09.441811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.441840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.442052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.442081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.442405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.442447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.442683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.442711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.443082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.443111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.443450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.443481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.443861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.443889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.444105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.444132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 
00:30:17.377 [2024-11-20 17:13:09.444473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.444502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.444879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.444908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.445272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.445304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.445521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.445549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.445914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.445943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.446300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.446331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.446717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.446745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.447104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.447133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.447536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.447567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.447921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.447950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 
00:30:17.377 [2024-11-20 17:13:09.448185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.448214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.448326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.448357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.448710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.448739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.448832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.448860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.449218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.449248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.449612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.449642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.450031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.450060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.450285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.450315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.450689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.450718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.451086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.451115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 
00:30:17.377 [2024-11-20 17:13:09.451500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.451530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.451894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.451924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.452155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.377 [2024-11-20 17:13:09.452193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.377 qpair failed and we were unable to recover it. 00:30:17.377 [2024-11-20 17:13:09.452539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.452567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.452930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.452959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.453315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.453347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.453571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.453599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.453936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.453965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.454324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.454354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.454711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.454740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 
00:30:17.378 [2024-11-20 17:13:09.455109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.455139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.455513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.455542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.455682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.455711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.455949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.455980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.456363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.456402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.456733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.456763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.457145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.457185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.457422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.457453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.457838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.457867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.458243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.458274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 
00:30:17.378 [2024-11-20 17:13:09.458475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.458504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.458923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.458952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.459300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.459329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.459710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.459739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.460096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.460126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.460510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.460540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.460896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.460925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.461341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.461372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.461740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.461770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.462008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.462039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 
00:30:17.378 [2024-11-20 17:13:09.462285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.462315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.462553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.462583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.462973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.463001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.378 [2024-11-20 17:13:09.463388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.378 [2024-11-20 17:13:09.463418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.378 qpair failed and we were unable to recover it. 00:30:17.379 [2024-11-20 17:13:09.463786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.379 [2024-11-20 17:13:09.463816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.379 qpair failed and we were unable to recover it. 00:30:17.379 [2024-11-20 17:13:09.464171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.379 [2024-11-20 17:13:09.464201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.379 qpair failed and we were unable to recover it. 00:30:17.379 [2024-11-20 17:13:09.464533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.379 [2024-11-20 17:13:09.464563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.379 qpair failed and we were unable to recover it. 00:30:17.379 [2024-11-20 17:13:09.464783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.379 [2024-11-20 17:13:09.464811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.379 qpair failed and we were unable to recover it. 00:30:17.379 [2024-11-20 17:13:09.465191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.379 [2024-11-20 17:13:09.465221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.379 qpair failed and we were unable to recover it. 00:30:17.379 [2024-11-20 17:13:09.465469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.379 [2024-11-20 17:13:09.465497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.379 qpair failed and we were unable to recover it. 
00:30:17.379 [2024-11-20 17:13:09.465914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.379 [2024-11-20 17:13:09.465943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.379 qpair failed and we were unable to recover it. 00:30:17.379 [2024-11-20 17:13:09.466301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.379 [2024-11-20 17:13:09.466332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.379 qpair failed and we were unable to recover it. 00:30:17.379 [2024-11-20 17:13:09.466572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.379 [2024-11-20 17:13:09.466600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.379 qpair failed and we were unable to recover it. 00:30:17.379 [2024-11-20 17:13:09.466977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.379 [2024-11-20 17:13:09.467006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.379 qpair failed and we were unable to recover it. 00:30:17.379 [2024-11-20 17:13:09.467375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.379 [2024-11-20 17:13:09.467406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.379 qpair failed and we were unable to recover it. 00:30:17.379 [2024-11-20 17:13:09.467627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.379 [2024-11-20 17:13:09.467655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.379 qpair failed and we were unable to recover it. 00:30:17.379 [2024-11-20 17:13:09.468029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.379 [2024-11-20 17:13:09.468057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.379 qpair failed and we were unable to recover it. 00:30:17.379 [2024-11-20 17:13:09.468398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.379 [2024-11-20 17:13:09.468428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.379 qpair failed and we were unable to recover it. 00:30:17.379 [2024-11-20 17:13:09.468789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.379 [2024-11-20 17:13:09.468820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.379 qpair failed and we were unable to recover it. 00:30:17.379 [2024-11-20 17:13:09.469112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.379 [2024-11-20 17:13:09.469140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.379 qpair failed and we were unable to recover it. 
00:30:17.379 [2024-11-20 17:13:09.469505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.379 [2024-11-20 17:13:09.469535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420
00:30:17.379 qpair failed and we were unable to recover it.
00:30:17.379 [... the same connect()/qpair-error triplet repeats for tqpair=0x7f3694000b90 from 17:13:09.469900 through 17:13:09.480648, every attempt failing with errno = 111 ...]
00:30:17.380 [2024-11-20 17:13:09.480978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.380 [2024-11-20 17:13:09.481074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420
00:30:17.380 qpair failed and we were unable to recover it.
00:30:17.380 [... the identical triplet then repeats for tqpair=0x7f3690000b90 from 17:13:09.481479 through 17:13:09.544397, with no attempt recovering ...]
00:30:17.669 [2024-11-20 17:13:09.544756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.669 [2024-11-20 17:13:09.544784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420
00:30:17.669 qpair failed and we were unable to recover it.
00:30:17.669 [2024-11-20 17:13:09.545130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.669 [2024-11-20 17:13:09.545166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.669 qpair failed and we were unable to recover it. 00:30:17.669 [2024-11-20 17:13:09.545497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.669 [2024-11-20 17:13:09.545526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.669 qpair failed and we were unable to recover it. 00:30:17.669 [2024-11-20 17:13:09.545889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.669 [2024-11-20 17:13:09.545917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.669 qpair failed and we were unable to recover it. 00:30:17.669 [2024-11-20 17:13:09.546272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.669 [2024-11-20 17:13:09.546302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.669 qpair failed and we were unable to recover it. 00:30:17.669 [2024-11-20 17:13:09.546513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.669 [2024-11-20 17:13:09.546541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.669 qpair failed and we were unable to recover it. 00:30:17.669 [2024-11-20 17:13:09.546871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.669 [2024-11-20 17:13:09.546900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.669 qpair failed and we were unable to recover it. 00:30:17.669 [2024-11-20 17:13:09.547257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.669 [2024-11-20 17:13:09.547286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.669 qpair failed and we were unable to recover it. 00:30:17.669 [2024-11-20 17:13:09.547527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.669 [2024-11-20 17:13:09.547555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.669 qpair failed and we were unable to recover it. 00:30:17.669 [2024-11-20 17:13:09.547789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.669 [2024-11-20 17:13:09.547819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.669 qpair failed and we were unable to recover it. 00:30:17.669 [2024-11-20 17:13:09.548081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.669 [2024-11-20 17:13:09.548113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.669 qpair failed and we were unable to recover it. 
00:30:17.669 [2024-11-20 17:13:09.548487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.669 [2024-11-20 17:13:09.548518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.669 qpair failed and we were unable to recover it. 00:30:17.669 [2024-11-20 17:13:09.548867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.669 [2024-11-20 17:13:09.548896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.669 qpair failed and we were unable to recover it. 00:30:17.669 [2024-11-20 17:13:09.549240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.669 [2024-11-20 17:13:09.549271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.549533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.549561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.549913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.549941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.550176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.550205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.550570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.550598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.550984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.551012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.551393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.551423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.551636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.551665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 
00:30:17.670 [2024-11-20 17:13:09.552015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.552044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.552387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.552423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.552794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.552823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.553185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.553216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.553543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.553572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.553944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.553973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.554335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.554367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.554628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.554656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.555005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.555034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.555255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.555285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 
00:30:17.670 [2024-11-20 17:13:09.555500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.555528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.555893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.555922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.556137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.556173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.556551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.556580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.556790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.556818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.557194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.557225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.557463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.557493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.557857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.557886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.558251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.670 [2024-11-20 17:13:09.558281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.670 qpair failed and we were unable to recover it. 00:30:17.670 [2024-11-20 17:13:09.558492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.558522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 
00:30:17.671 [2024-11-20 17:13:09.558743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.558771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.559137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.559175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.559539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.559568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.559787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.559816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.560178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.560208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.560572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.560600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.560956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.560986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.561349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.561380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.561804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.561833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.562191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.562220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 
00:30:17.671 [2024-11-20 17:13:09.562463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.562491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.562863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.562893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.563263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.563293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.563665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.563694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.564063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.564093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.564470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.564501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.564864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.564893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.565278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.565311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.565679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.565710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.566105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.566135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 
00:30:17.671 [2024-11-20 17:13:09.566535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.566566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.566939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.566975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.567191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.567222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.567598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.567627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.567991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.568020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.671 [2024-11-20 17:13:09.568394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.671 [2024-11-20 17:13:09.568424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.671 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.568649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.568678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.568935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.568963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.569324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.569355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.569731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.569762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 
00:30:17.672 [2024-11-20 17:13:09.570130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.570167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.570627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.570658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.571015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.571046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.571424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.571455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.571666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.571696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.572072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.572100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.572466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.572499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.572865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.572897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.573249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.573281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.573513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.573545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 
00:30:17.672 [2024-11-20 17:13:09.573903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.573932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.574271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.574303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.574681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.574711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.575089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.575118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.575487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.575517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.575867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.575896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.576257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.576290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.672 [2024-11-20 17:13:09.576507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.672 [2024-11-20 17:13:09.576536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.672 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.576874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.576905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.577141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.577182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 
00:30:17.673 [2024-11-20 17:13:09.577545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.577573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.577927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.577957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.578182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.578213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.578561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.578590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.578988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.579018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.579279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.579313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.579546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.579575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.579938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.579967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.580325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.580355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.580708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.580739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 
00:30:17.673 [2024-11-20 17:13:09.581105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.581137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.581534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.581571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.581945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.581975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.582194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.582223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.582515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.582543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.582889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.582918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.583292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.583324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.583667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.583697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.584072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.584100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.584481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.584513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 
00:30:17.673 [2024-11-20 17:13:09.584894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.584925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.585281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.585314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.585697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.585728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.585980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.586011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.673 qpair failed and we were unable to recover it. 00:30:17.673 [2024-11-20 17:13:09.586270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.673 [2024-11-20 17:13:09.586300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.674 qpair failed and we were unable to recover it. 00:30:17.674 [2024-11-20 17:13:09.586654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.674 [2024-11-20 17:13:09.586683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.674 qpair failed and we were unable to recover it. 00:30:17.674 [2024-11-20 17:13:09.586924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.674 [2024-11-20 17:13:09.586955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.674 qpair failed and we were unable to recover it. 00:30:17.674 [2024-11-20 17:13:09.587318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.674 [2024-11-20 17:13:09.587349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.674 qpair failed and we were unable to recover it. 00:30:17.674 [2024-11-20 17:13:09.587566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.674 [2024-11-20 17:13:09.587598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.674 qpair failed and we were unable to recover it. 00:30:17.674 [2024-11-20 17:13:09.587958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.674 [2024-11-20 17:13:09.587989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.674 qpair failed and we were unable to recover it. 
00:30:17.674 [2024-11-20 17:13:09.588357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.674 [2024-11-20 17:13:09.588389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.674 qpair failed and we were unable to recover it. 00:30:17.674 [2024-11-20 17:13:09.588608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.674 [2024-11-20 17:13:09.588636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.674 qpair failed and we were unable to recover it. 00:30:17.674 [2024-11-20 17:13:09.588996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.674 [2024-11-20 17:13:09.589027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.674 qpair failed and we were unable to recover it. 00:30:17.674 [2024-11-20 17:13:09.589418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.674 [2024-11-20 17:13:09.589448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.674 qpair failed and we were unable to recover it. 00:30:17.674 [2024-11-20 17:13:09.589787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.674 [2024-11-20 17:13:09.589818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.674 qpair failed and we were unable to recover it. 00:30:17.674 [2024-11-20 17:13:09.590059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.674 [2024-11-20 17:13:09.590089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.674 qpair failed and we were unable to recover it. 00:30:17.674 [2024-11-20 17:13:09.590408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.674 [2024-11-20 17:13:09.590440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.674 qpair failed and we were unable to recover it. 00:30:17.674 [2024-11-20 17:13:09.590683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.674 [2024-11-20 17:13:09.590712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.674 qpair failed and we were unable to recover it. 00:30:17.674 [2024-11-20 17:13:09.590981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.674 [2024-11-20 17:13:09.591011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.674 qpair failed and we were unable to recover it. 00:30:17.674 [2024-11-20 17:13:09.591382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.674 [2024-11-20 17:13:09.591413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.674 qpair failed and we were unable to recover it. 
00:30:17.674 [2024-11-20 17:13:09.591780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.674 [2024-11-20 17:13:09.591810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.674 qpair failed and we were unable to recover it.
00:30:17.674 (the identical connect() failed, errno = 111 / sock connection error / qpair failed sequence for tqpair=0x7f3690000b90 repeated 128 more times, 2024-11-20 17:13:09.592175 through 17:13:09.638153)
00:30:17.679 [2024-11-20 17:13:09.638294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.679 [2024-11-20 17:13:09.638323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.679 qpair failed and we were unable to recover it.
00:30:17.679 [2024-11-20 17:13:09.638838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.679 [2024-11-20 17:13:09.638944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.679 qpair failed and we were unable to recover it.
00:30:17.679 (the identical sequence for tqpair=0x7f3694000b90 repeated 78 more times, 2024-11-20 17:13:09.639507 through 17:13:09.666902)
00:30:17.682 [2024-11-20 17:13:09.667220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.682 [2024-11-20 17:13:09.667249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.682 qpair failed and we were unable to recover it.
00:30:17.682 [2024-11-20 17:13:09.667523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.682 [2024-11-20 17:13:09.667551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.682 qpair failed and we were unable to recover it. 00:30:17.682 [2024-11-20 17:13:09.667902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.682 [2024-11-20 17:13:09.667945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.682 qpair failed and we were unable to recover it. 00:30:17.682 [2024-11-20 17:13:09.668189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.682 [2024-11-20 17:13:09.668218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.682 qpair failed and we were unable to recover it. 00:30:17.682 [2024-11-20 17:13:09.668501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.682 [2024-11-20 17:13:09.668530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.682 qpair failed and we were unable to recover it. 00:30:17.682 [2024-11-20 17:13:09.668936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.682 [2024-11-20 17:13:09.668964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.682 qpair failed and we were unable to recover it. 00:30:17.682 [2024-11-20 17:13:09.669248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.682 [2024-11-20 17:13:09.669279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.682 qpair failed and we were unable to recover it. 00:30:17.682 [2024-11-20 17:13:09.669562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.682 [2024-11-20 17:13:09.669594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.682 qpair failed and we were unable to recover it. 00:30:17.682 [2024-11-20 17:13:09.669834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.682 [2024-11-20 17:13:09.669863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.682 qpair failed and we were unable to recover it. 00:30:17.682 [2024-11-20 17:13:09.670256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.682 [2024-11-20 17:13:09.670287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.682 qpair failed and we were unable to recover it. 00:30:17.682 [2024-11-20 17:13:09.670649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.682 [2024-11-20 17:13:09.670677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.682 qpair failed and we were unable to recover it. 
00:30:17.682 [2024-11-20 17:13:09.671053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.682 [2024-11-20 17:13:09.671081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.682 qpair failed and we were unable to recover it. 00:30:17.682 [2024-11-20 17:13:09.671385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.671415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.671756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.671785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.672107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.672135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.672414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.672443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.672793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.672822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.673056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.673083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.673319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.673351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.673608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.673637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.673855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.673884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 
00:30:17.683 [2024-11-20 17:13:09.674256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.674285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.674690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.674719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.674815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.674842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.675176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.675206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.675566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.675595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.675837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.675865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.676048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.676076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.676444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.676474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.676694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.676723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.677057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.677086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 
00:30:17.683 [2024-11-20 17:13:09.677306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.677338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.677565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.677593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.677806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.677836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.678113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.678141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.678400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.678429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.678672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.678700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.678932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.678960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.683 qpair failed and we were unable to recover it. 00:30:17.683 [2024-11-20 17:13:09.679199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.683 [2024-11-20 17:13:09.679233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.679584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.679613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.679984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.680013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 
00:30:17.684 [2024-11-20 17:13:09.680390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.680420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.680634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.680671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.680887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.680916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.681286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.681317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.681693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.681722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.681967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.681996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.682331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.682360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.682746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.682775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.683146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.683184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.683545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.683574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 
00:30:17.684 [2024-11-20 17:13:09.683923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.683952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.684339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.684369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.684695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.684724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.684940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.684969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.685193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.685225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.685548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.685578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.685812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.685845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.686192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.686223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.686587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.686616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.686980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.687009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 
00:30:17.684 [2024-11-20 17:13:09.687380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.687411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.687765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.687795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.688154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.688192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.688397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.684 [2024-11-20 17:13:09.688427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.684 qpair failed and we were unable to recover it. 00:30:17.684 [2024-11-20 17:13:09.688790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.688820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 00:30:17.685 [2024-11-20 17:13:09.689041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.689071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 00:30:17.685 [2024-11-20 17:13:09.689282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.689315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 00:30:17.685 [2024-11-20 17:13:09.689696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.689725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 00:30:17.685 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:17.685 [2024-11-20 17:13:09.690089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.690120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 
00:30:17.685 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:17.685 [2024-11-20 17:13:09.690532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.690563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 00:30:17.685 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:17.685 [2024-11-20 17:13:09.690794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.690824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 00:30:17.685 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:17.685 [2024-11-20 17:13:09.691033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.691064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 00:30:17.685 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.685 [2024-11-20 17:13:09.691409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.691439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 00:30:17.685 [2024-11-20 17:13:09.691538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.691566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3694000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 00:30:17.685 [2024-11-20 17:13:09.691959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.692068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 00:30:17.685 [2024-11-20 17:13:09.692491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.692599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 00:30:17.685 [2024-11-20 17:13:09.693000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.693038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 
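Interleaved with the socket errors, the xtrace lines from nvmf_target_disconnect_tc2 show the harness leaving its start-up wait: "(( i == 0 ))" at autotest_common.sh@864 is the final check of a countdown loop, "return 0" at @868 means the counter had not expired, and timing_exit/xtrace_disable then close out start_nvmf_tgt. A sketch of that loop's function body (assumed shape only; the real code lives in common/autotest_common.sh and may differ in the counter value and the readiness check, and $nvmfpid is a placeholder):

  # Assumed shape of the start-up wait traced above, not the actual
  # autotest_common.sh source: poll until the target process exists,
  # counting down; fail if the counter ran out, succeed otherwise.
  i=30
  while (( i > 0 )) && ! kill -0 "$nvmfpid" 2>/dev/null; do
    sleep 1
    i=$(( i - 1 ))
  done
  if (( i == 0 )); then
    return 1    # timed out waiting for the nvmf target
  fi
  return 0      # target is up; the test continues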
00:30:17.685 [2024-11-20 17:13:09.693501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.693612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 00:30:17.685 [2024-11-20 17:13:09.693899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.693937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 00:30:17.685 [2024-11-20 17:13:09.694298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.694343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 00:30:17.685 [2024-11-20 17:13:09.694667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.694699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 00:30:17.685 [2024-11-20 17:13:09.695050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.695079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 00:30:17.685 [2024-11-20 17:13:09.695461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.695493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 00:30:17.685 [2024-11-20 17:13:09.695846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.685 [2024-11-20 17:13:09.695876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.685 qpair failed and we were unable to recover it. 00:30:17.685 [2024-11-20 17:13:09.696233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.696266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.696626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.696655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.697022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.697052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 
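By this point the refused qpair is 0x7f3690000b90 rather than 0x7f3694000b90: partway through the records above, the host allocated a fresh qpair object for its next round of reconnect attempts, and it is refused in exactly the same way. On a saved copy of this console output, that handover is easy to spot by listing each distinct handle once in order of first appearance (the file name build.log is hypothetical):

  # Hypothetical post-processing of a saved copy of this log: print each
  # distinct tqpair handle once, in the order it first shows up.
  grep -o 'tqpair=0x[0-9a-f]*' build.log | awk '!seen[$0]++'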
00:30:17.686 [2024-11-20 17:13:09.697286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.697317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.697698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.697727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.697969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.697997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.698428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.698459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.698702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.698730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.698989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.699023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.699490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.699521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.699888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.699917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.700295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.700326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.700567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.700598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 
00:30:17.686 [2024-11-20 17:13:09.700977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.701008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.701348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.701380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.701742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.701772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.702136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.702198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.702452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.702481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.702839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.702868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.703241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.703271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.703499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.703529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.703878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.703915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.704295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.704326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 
00:30:17.686 [2024-11-20 17:13:09.704690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.704720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.705086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.705119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.705496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.705532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.686 qpair failed and we were unable to recover it. 00:30:17.686 [2024-11-20 17:13:09.705946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.686 [2024-11-20 17:13:09.705977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.687 qpair failed and we were unable to recover it. 00:30:17.687 [2024-11-20 17:13:09.706326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.687 [2024-11-20 17:13:09.706359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.687 qpair failed and we were unable to recover it. 00:30:17.687 [2024-11-20 17:13:09.706604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.687 [2024-11-20 17:13:09.706634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.687 qpair failed and we were unable to recover it. 00:30:17.687 [2024-11-20 17:13:09.706988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.687 [2024-11-20 17:13:09.707018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.687 qpair failed and we were unable to recover it. 00:30:17.687 [2024-11-20 17:13:09.707152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.687 [2024-11-20 17:13:09.707196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.687 qpair failed and we were unable to recover it. 00:30:17.687 [2024-11-20 17:13:09.707301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.687 [2024-11-20 17:13:09.707329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.687 qpair failed and we were unable to recover it. 00:30:17.687 [2024-11-20 17:13:09.707706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.687 [2024-11-20 17:13:09.707735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.687 qpair failed and we were unable to recover it. 
00:30:17.687 [2024-11-20 17:13:09.708094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.687 [2024-11-20 17:13:09.708124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.687 qpair failed and we were unable to recover it. 00:30:17.687 [2024-11-20 17:13:09.708430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.687 [2024-11-20 17:13:09.708460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.687 qpair failed and we were unable to recover it. 00:30:17.687 [2024-11-20 17:13:09.708802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.687 [2024-11-20 17:13:09.708841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.687 qpair failed and we were unable to recover it. 00:30:17.687 [2024-11-20 17:13:09.709197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.687 [2024-11-20 17:13:09.709228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.687 qpair failed and we were unable to recover it. 00:30:17.687 [2024-11-20 17:13:09.709609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.687 [2024-11-20 17:13:09.709638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.687 qpair failed and we were unable to recover it. 00:30:17.687 [2024-11-20 17:13:09.709845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.687 [2024-11-20 17:13:09.709874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.687 qpair failed and we were unable to recover it. 00:30:17.687 [2024-11-20 17:13:09.710244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.687 [2024-11-20 17:13:09.710274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.687 qpair failed and we were unable to recover it. 00:30:17.687 [2024-11-20 17:13:09.710624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.687 [2024-11-20 17:13:09.710654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.687 qpair failed and we were unable to recover it. 00:30:17.687 [2024-11-20 17:13:09.711011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.687 [2024-11-20 17:13:09.711040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.687 qpair failed and we were unable to recover it. 00:30:17.687 [2024-11-20 17:13:09.711381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.687 [2024-11-20 17:13:09.711414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420 00:30:17.687 qpair failed and we were unable to recover it. 
00:30:17.687 [2024-11-20 17:13:09.711796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.687 [2024-11-20 17:13:09.711824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3690000b90 with addr=10.0.0.2, port=4420
00:30:17.687 qpair failed and we were unable to recover it.
[... the three messages above repeat, with only the timestamps advancing, for each further connection attempt to 10.0.0.2 port 4420 ...]
00:30:17.689 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:17.690 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:17.690 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:17.690 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the same connect()/qpair-failure messages continue to interleave with the test output ...]
00:30:17.694 Malloc0
00:30:17.694 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:17.694 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:17.694 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:17.694 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:17.695 [2024-11-20 17:13:09.778524 ff.] (two more failed connect attempts, same pattern)
00:30:17.695 [2024-11-20 17:13:09.778983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:17.695 [2024-11-20 17:13:09.779199 ff.] posix.c:1054 / nvme_tcp.c:2288: the connect()-refused sequence continues uninterrupted across this stretch (errno = 111, tqpair=0x7f3690000b90, addr=10.0.0.2, port=4420), each attempt ending: qpair failed and we were unable to recover it.
00:30:17.696 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:17.696 [2024-11-20 17:13:09.788252] (one more failed connect attempt, same pattern)
00:30:17.696 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:17.696 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:17.696 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:17.696 [2024-11-20 17:13:09.788649 ff.] posix.c:1054 / nvme_tcp.c:2288: the repeated connect() failures (errno = 111, tqpair=0x7f3690000b90, 10.0.0.2:4420) continue in the background of the trace lines above, each ending: qpair failed and we were unable to recover it.
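target_disconnect.sh@22 creates the NQN the host will later connect to. Read as a manual rpc.py invocation it would look like the sketch below; taking -a as "allow any host NQN" and -s as the subsystem serial number is how those flags are commonly used, but treat that reading as an annotation rather than something this log states.

    # sketch: create the subsystem by hand with the same arguments the test used
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001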
00:30:17.696 [2024-11-20 17:13:09.791799 ff.] posix.c:1054 / nvme_tcp.c:2288: roughly twenty further connect() attempts fail with errno = 111 (tqpair=0x7f3690000b90, 10.0.0.2:4420), each ending: qpair failed and we were unable to recover it.
00:30:17.697 [2024-11-20 17:13:09.798458 ff.] posix.c:1054 / nvme_tcp.c:2288: connect() failures continue, interleaved with the trace lines below (errno = 111, same pattern).
00:30:17.697 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:17.697 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:17.697 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:17.697 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:17.697 [2024-11-20 17:13:09.800971 ff.] (connect() failures continue, same pattern)
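target_disconnect.sh@24 attaches the bdev named Malloc0 to the subsystem as a namespace. Malloc0 itself was created earlier in the run, outside this excerpt, so the sketch below reconstructs both halves by hand; the 64 MiB size and 512-byte block size are illustrative assumptions, not values taken from this log.

    # sketch: back the subsystem with a RAM-disk bdev, then expose it as a namespace
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0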
00:30:17.697 [2024-11-20 17:13:09.801259 ff.] posix.c:1054 / nvme_tcp.c:2288: roughly thirty further connect() attempts fail with errno = 111 (tqpair=0x7f3690000b90, 10.0.0.2:4420), each ending: qpair failed and we were unable to recover it.
00:30:17.698 [2024-11-20 17:13:09.812054 ff.] posix.c:1054 / nvme_tcp.c:2288: connect() failures continue throughout the trace lines below (errno = 111, same pattern).
00:30:17.698 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:17.698 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:17.698 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:17.698 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:17.698 [2024-11-20 17:13:09.813217 ff.] posix.c:1054 / nvme_tcp.c:2288: connect() attempts keep failing with errno = 111 right up until the listener below comes up, each ending: qpair failed and we were unable to recover it.
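target_disconnect.sh@25 is the step the host side has been spinning on: until a TCP listener exists on 10.0.0.2:4420, every connect() is refused, which is exactly the errno = 111 noise above. A hand-run equivalent is the sketch below, using the same arguments the trace shows and nothing beyond them:

    # sketch: open the NVMe/TCP listener the host-side connects to
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420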
00:30:17.699 [2024-11-20 17:13:09.818428 ff.] (three final connect() failures, errno = 111, same pattern)
00:30:17.699 [2024-11-20 17:13:09.819382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:17.963 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:17.963 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:17.963 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:17.963 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:17.963 [2024-11-20 17:13:09.830343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:17.963 [2024-11-20 17:13:09.830480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:17.963 [2024-11-20 17:13:09.830522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:17.963 [2024-11-20 17:13:09.830540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:17.963 [2024-11-20 17:13:09.830555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:17.963 [2024-11-20 17:13:09.830601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:17.963 qpair failed and we were unable to recover it.
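From here the failure mode changes: the TCP connects themselves now succeed, but the Fabrics CONNECT for an I/O qpair is rejected because the target does not recognize controller ID 0x1, which is consistent with what this target-disconnect test case exercises. Reading the host-side status pair sct 1, sc 130 as command-specific status 0x82 (CONNECT Invalid Parameters, per the NVMe-oF spec) is an annotation, not something the log states. For comparison, a kernel-initiator connect to the same listener would be issued roughly like this sketch:

    # sketch: connect a Linux host to the subsystem this test exposes (nvme-cli)
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1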
00:30:17.964 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:17.964 17:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2154297
00:30:17.964 [2024-11-20 17:13:09.840128 ff.] ctrlr.c: 762 / nvme_fabric.c: 599, 610 / nvme_tcp.c:2348, 2125 / nvme_qpair.c: 812: three further iterations of the sequence above (Unknown controller ID 0x1 → Connect command failed, rc -5 → sct 1, sc 130 → CQ transport error -6 on qpair id 3), each ending: qpair failed and we were unable to recover it.
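target_disconnect.sh@50's wait is plain shell job control: the trace implies the host-side workload was launched in the background earlier in the script, and the test now blocks until that PID (2154297 here) exits, collecting its status while the qpair errors stream past. The shape of that pattern, with hypothetical names standing in for the script's own:

    # sketch of the background-then-wait pattern the trace implies
    run_host_workload &      # hypothetical stand-in for the earlier backgrounded step
    workload_pid=$!          # bash: PID of the most recent background job
    wait "$workload_pid"     # corresponds to "wait 2154297" in the trace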
00:30:17.964 [2024-11-20 17:13:09.870109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.964 [2024-11-20 17:13:09.870182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.964 [2024-11-20 17:13:09.870200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.964 [2024-11-20 17:13:09.870208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.964 [2024-11-20 17:13:09.870215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:17.964 [2024-11-20 17:13:09.870232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.964 qpair failed and we were unable to recover it. 00:30:17.964 [2024-11-20 17:13:09.880102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.964 [2024-11-20 17:13:09.880212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.964 [2024-11-20 17:13:09.880230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.964 [2024-11-20 17:13:09.880239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.964 [2024-11-20 17:13:09.880245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:17.964 [2024-11-20 17:13:09.880263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.964 qpair failed and we were unable to recover it. 00:30:17.964 [2024-11-20 17:13:09.890115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.964 [2024-11-20 17:13:09.890180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.964 [2024-11-20 17:13:09.890198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.964 [2024-11-20 17:13:09.890205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.964 [2024-11-20 17:13:09.890212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:17.964 [2024-11-20 17:13:09.890230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.964 qpair failed and we were unable to recover it. 
00:30:17.964 [2024-11-20 17:13:09.900140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.964 [2024-11-20 17:13:09.900226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.964 [2024-11-20 17:13:09.900249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.964 [2024-11-20 17:13:09.900257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.964 [2024-11-20 17:13:09.900264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:17.964 [2024-11-20 17:13:09.900283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.964 qpair failed and we were unable to recover it. 00:30:17.964 [2024-11-20 17:13:09.910242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.964 [2024-11-20 17:13:09.910321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.964 [2024-11-20 17:13:09.910345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.964 [2024-11-20 17:13:09.910352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.964 [2024-11-20 17:13:09.910359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:17.964 [2024-11-20 17:13:09.910377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.964 qpair failed and we were unable to recover it. 00:30:17.964 [2024-11-20 17:13:09.920202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.964 [2024-11-20 17:13:09.920263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.964 [2024-11-20 17:13:09.920281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.964 [2024-11-20 17:13:09.920288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.964 [2024-11-20 17:13:09.920295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:17.964 [2024-11-20 17:13:09.920313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.964 qpair failed and we were unable to recover it. 
00:30:17.964 [2024-11-20 17:13:09.930251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.964 [2024-11-20 17:13:09.930327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.964 [2024-11-20 17:13:09.930344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.964 [2024-11-20 17:13:09.930352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.964 [2024-11-20 17:13:09.930358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:17.964 [2024-11-20 17:13:09.930375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.964 qpair failed and we were unable to recover it. 00:30:17.964 [2024-11-20 17:13:09.940245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.964 [2024-11-20 17:13:09.940345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.964 [2024-11-20 17:13:09.940362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.964 [2024-11-20 17:13:09.940370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.964 [2024-11-20 17:13:09.940377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:17.964 [2024-11-20 17:13:09.940393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.964 qpair failed and we were unable to recover it. 00:30:17.964 [2024-11-20 17:13:09.950284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.964 [2024-11-20 17:13:09.950362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.964 [2024-11-20 17:13:09.950380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.964 [2024-11-20 17:13:09.950393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.964 [2024-11-20 17:13:09.950399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:17.964 [2024-11-20 17:13:09.950417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.964 qpair failed and we were unable to recover it. 
00:30:17.964 [2024-11-20 17:13:09.960281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.965 [2024-11-20 17:13:09.960341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.965 [2024-11-20 17:13:09.960358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.965 [2024-11-20 17:13:09.960366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.965 [2024-11-20 17:13:09.960372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:17.965 [2024-11-20 17:13:09.960390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.965 qpair failed and we were unable to recover it. 00:30:17.965 [2024-11-20 17:13:09.970340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.965 [2024-11-20 17:13:09.970404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.965 [2024-11-20 17:13:09.970423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.965 [2024-11-20 17:13:09.970430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.965 [2024-11-20 17:13:09.970437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:17.965 [2024-11-20 17:13:09.970455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.965 qpair failed and we were unable to recover it. 00:30:17.965 [2024-11-20 17:13:09.980263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.965 [2024-11-20 17:13:09.980370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.965 [2024-11-20 17:13:09.980388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.965 [2024-11-20 17:13:09.980395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.965 [2024-11-20 17:13:09.980402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:17.965 [2024-11-20 17:13:09.980420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.965 qpair failed and we were unable to recover it. 
00:30:17.965 [2024-11-20 17:13:09.990427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:17.965 [2024-11-20 17:13:09.990523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:17.965 [2024-11-20 17:13:09.990547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:17.965 [2024-11-20 17:13:09.990559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:17.965 [2024-11-20 17:13:09.990565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:17.965 [2024-11-20 17:13:09.990584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:17.965 qpair failed and we were unable to recover it.
00:30:17.965 [2024-11-20 17:13:10.000448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:17.965 [2024-11-20 17:13:10.000565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:17.965 [2024-11-20 17:13:10.000586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:17.965 [2024-11-20 17:13:10.000594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:17.965 [2024-11-20 17:13:10.000601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:17.965 [2024-11-20 17:13:10.000618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:17.965 qpair failed and we were unable to recover it.
00:30:17.965 [2024-11-20 17:13:10.010519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:17.965 [2024-11-20 17:13:10.010597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:17.965 [2024-11-20 17:13:10.010619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:17.965 [2024-11-20 17:13:10.010628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:17.965 [2024-11-20 17:13:10.010636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:17.965 [2024-11-20 17:13:10.010656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:17.965 qpair failed and we were unable to recover it.
00:30:17.965 [2024-11-20 17:13:10.020542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:17.965 [2024-11-20 17:13:10.020621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:17.965 [2024-11-20 17:13:10.020638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:17.965 [2024-11-20 17:13:10.020646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:17.965 [2024-11-20 17:13:10.020653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:17.965 [2024-11-20 17:13:10.020671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:17.965 qpair failed and we were unable to recover it.
00:30:17.965 [2024-11-20 17:13:10.030557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:17.965 [2024-11-20 17:13:10.030636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:17.965 [2024-11-20 17:13:10.030653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:17.965 [2024-11-20 17:13:10.030661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:17.965 [2024-11-20 17:13:10.030668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:17.965 [2024-11-20 17:13:10.030685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:17.965 qpair failed and we were unable to recover it.
00:30:17.965 [2024-11-20 17:13:10.040683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:17.965 [2024-11-20 17:13:10.040766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:17.965 [2024-11-20 17:13:10.040791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:17.965 [2024-11-20 17:13:10.040800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:17.965 [2024-11-20 17:13:10.040809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:17.965 [2024-11-20 17:13:10.040830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:17.965 qpair failed and we were unable to recover it.
00:30:17.965 [2024-11-20 17:13:10.050653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:17.965 [2024-11-20 17:13:10.050724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:17.965 [2024-11-20 17:13:10.050743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:17.965 [2024-11-20 17:13:10.050751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:17.965 [2024-11-20 17:13:10.050758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:17.965 [2024-11-20 17:13:10.050776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:17.965 qpair failed and we were unable to recover it.
00:30:17.965 [2024-11-20 17:13:10.060677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:17.965 [2024-11-20 17:13:10.060749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:17.965 [2024-11-20 17:13:10.060767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:17.965 [2024-11-20 17:13:10.060774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:17.965 [2024-11-20 17:13:10.060781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:17.965 [2024-11-20 17:13:10.060799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:17.965 qpair failed and we were unable to recover it.
00:30:17.965 [2024-11-20 17:13:10.070702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:17.965 [2024-11-20 17:13:10.070777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:17.965 [2024-11-20 17:13:10.070805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:17.965 [2024-11-20 17:13:10.070816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:17.965 [2024-11-20 17:13:10.070826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:17.965 [2024-11-20 17:13:10.070862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:17.965 qpair failed and we were unable to recover it.
00:30:17.965 [2024-11-20 17:13:10.080690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:17.965 [2024-11-20 17:13:10.080787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:17.965 [2024-11-20 17:13:10.080810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:17.965 [2024-11-20 17:13:10.080828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:17.965 [2024-11-20 17:13:10.080836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:17.965 [2024-11-20 17:13:10.080857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:17.965 qpair failed and we were unable to recover it.
00:30:17.966 [2024-11-20 17:13:10.090722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:17.966 [2024-11-20 17:13:10.090803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:17.966 [2024-11-20 17:13:10.090822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:17.966 [2024-11-20 17:13:10.090830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:17.966 [2024-11-20 17:13:10.090837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:17.966 [2024-11-20 17:13:10.090855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:17.966 qpair failed and we were unable to recover it.
00:30:17.966 [2024-11-20 17:13:10.100744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:17.966 [2024-11-20 17:13:10.100825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:17.966 [2024-11-20 17:13:10.100863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:17.966 [2024-11-20 17:13:10.100874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:17.966 [2024-11-20 17:13:10.100881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:17.966 [2024-11-20 17:13:10.100907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:17.966 qpair failed and we were unable to recover it.
00:30:17.966 [2024-11-20 17:13:10.110806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:17.966 [2024-11-20 17:13:10.110882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:17.966 [2024-11-20 17:13:10.110919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:17.966 [2024-11-20 17:13:10.110929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:17.966 [2024-11-20 17:13:10.110937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:17.966 [2024-11-20 17:13:10.110963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:17.966 qpair failed and we were unable to recover it.
00:30:17.966 [2024-11-20 17:13:10.120716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:17.966 [2024-11-20 17:13:10.120811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:17.966 [2024-11-20 17:13:10.120832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:17.966 [2024-11-20 17:13:10.120840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:17.966 [2024-11-20 17:13:10.120849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:17.966 [2024-11-20 17:13:10.120876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:17.966 qpair failed and we were unable to recover it.
00:30:17.966 [2024-11-20 17:13:10.130808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:17.966 [2024-11-20 17:13:10.130876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:17.966 [2024-11-20 17:13:10.130895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:17.966 [2024-11-20 17:13:10.130903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:17.966 [2024-11-20 17:13:10.130910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:17.966 [2024-11-20 17:13:10.130928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:17.966 qpair failed and we were unable to recover it.
00:30:18.230 [2024-11-20 17:13:10.140761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.230 [2024-11-20 17:13:10.140876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.230 [2024-11-20 17:13:10.140905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.230 [2024-11-20 17:13:10.140915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.230 [2024-11-20 17:13:10.140923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.230 [2024-11-20 17:13:10.140948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.230 qpair failed and we were unable to recover it.
00:30:18.230 [2024-11-20 17:13:10.150918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.230 [2024-11-20 17:13:10.150992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.230 [2024-11-20 17:13:10.151012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.230 [2024-11-20 17:13:10.151021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.230 [2024-11-20 17:13:10.151028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.230 [2024-11-20 17:13:10.151046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.230 qpair failed and we were unable to recover it.
00:30:18.230 [2024-11-20 17:13:10.160942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.230 [2024-11-20 17:13:10.161011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.230 [2024-11-20 17:13:10.161030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.230 [2024-11-20 17:13:10.161038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.230 [2024-11-20 17:13:10.161045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.230 [2024-11-20 17:13:10.161064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.230 qpair failed and we were unable to recover it.
00:30:18.230 [2024-11-20 17:13:10.170937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.230 [2024-11-20 17:13:10.170999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.230 [2024-11-20 17:13:10.171018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.230 [2024-11-20 17:13:10.171026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.230 [2024-11-20 17:13:10.171033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.230 [2024-11-20 17:13:10.171051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.230 qpair failed and we were unable to recover it.
00:30:18.230 [2024-11-20 17:13:10.180993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.230 [2024-11-20 17:13:10.181075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.230 [2024-11-20 17:13:10.181096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.230 [2024-11-20 17:13:10.181109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.230 [2024-11-20 17:13:10.181116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.230 [2024-11-20 17:13:10.181135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.230 qpair failed and we were unable to recover it.
00:30:18.230 [2024-11-20 17:13:10.191078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.230 [2024-11-20 17:13:10.191183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.230 [2024-11-20 17:13:10.191202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.230 [2024-11-20 17:13:10.191211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.230 [2024-11-20 17:13:10.191218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.230 [2024-11-20 17:13:10.191235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.230 qpair failed and we were unable to recover it.
00:30:18.230 [2024-11-20 17:13:10.201074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.230 [2024-11-20 17:13:10.201153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.230 [2024-11-20 17:13:10.201176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.230 [2024-11-20 17:13:10.201184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.230 [2024-11-20 17:13:10.201191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.230 [2024-11-20 17:13:10.201210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.230 qpair failed and we were unable to recover it.
00:30:18.230 [2024-11-20 17:13:10.210959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.230 [2024-11-20 17:13:10.211046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.230 [2024-11-20 17:13:10.211072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.230 [2024-11-20 17:13:10.211080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.230 [2024-11-20 17:13:10.211086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.230 [2024-11-20 17:13:10.211105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.230 qpair failed and we were unable to recover it.
00:30:18.231 [2024-11-20 17:13:10.221118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.231 [2024-11-20 17:13:10.221197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.231 [2024-11-20 17:13:10.221217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.231 [2024-11-20 17:13:10.221225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.231 [2024-11-20 17:13:10.221231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.231 [2024-11-20 17:13:10.221250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.231 qpair failed and we were unable to recover it.
00:30:18.231 [2024-11-20 17:13:10.231177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.231 [2024-11-20 17:13:10.231262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.231 [2024-11-20 17:13:10.231280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.231 [2024-11-20 17:13:10.231288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.231 [2024-11-20 17:13:10.231295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.231 [2024-11-20 17:13:10.231314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.231 qpair failed and we were unable to recover it.
00:30:18.231 [2024-11-20 17:13:10.241164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.231 [2024-11-20 17:13:10.241223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.231 [2024-11-20 17:13:10.241241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.231 [2024-11-20 17:13:10.241250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.231 [2024-11-20 17:13:10.241258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.231 [2024-11-20 17:13:10.241276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.231 qpair failed and we were unable to recover it.
00:30:18.231 [2024-11-20 17:13:10.251181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.231 [2024-11-20 17:13:10.251248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.231 [2024-11-20 17:13:10.251264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.231 [2024-11-20 17:13:10.251272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.231 [2024-11-20 17:13:10.251284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.231 [2024-11-20 17:13:10.251301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.231 qpair failed and we were unable to recover it.
00:30:18.231 [2024-11-20 17:13:10.261222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.231 [2024-11-20 17:13:10.261319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.231 [2024-11-20 17:13:10.261336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.231 [2024-11-20 17:13:10.261344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.231 [2024-11-20 17:13:10.261351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.231 [2024-11-20 17:13:10.261368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.231 qpair failed and we were unable to recover it.
00:30:18.231 [2024-11-20 17:13:10.271293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.231 [2024-11-20 17:13:10.271369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.231 [2024-11-20 17:13:10.271385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.231 [2024-11-20 17:13:10.271393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.231 [2024-11-20 17:13:10.271400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.231 [2024-11-20 17:13:10.271418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.231 qpair failed and we were unable to recover it.
00:30:18.231 [2024-11-20 17:13:10.281306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.231 [2024-11-20 17:13:10.281401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.231 [2024-11-20 17:13:10.281418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.231 [2024-11-20 17:13:10.281426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.231 [2024-11-20 17:13:10.281432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.231 [2024-11-20 17:13:10.281450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.231 qpair failed and we were unable to recover it.
00:30:18.231 [2024-11-20 17:13:10.291285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.231 [2024-11-20 17:13:10.291381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.231 [2024-11-20 17:13:10.291399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.231 [2024-11-20 17:13:10.291409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.231 [2024-11-20 17:13:10.291416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.231 [2024-11-20 17:13:10.291434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.231 qpair failed and we were unable to recover it.
00:30:18.231 [2024-11-20 17:13:10.301271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.231 [2024-11-20 17:13:10.301336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.231 [2024-11-20 17:13:10.301353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.231 [2024-11-20 17:13:10.301361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.231 [2024-11-20 17:13:10.301367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.231 [2024-11-20 17:13:10.301384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.231 qpair failed and we were unable to recover it.
00:30:18.231 [2024-11-20 17:13:10.311418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.231 [2024-11-20 17:13:10.311506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.231 [2024-11-20 17:13:10.311529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.231 [2024-11-20 17:13:10.311543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.231 [2024-11-20 17:13:10.311550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.231 [2024-11-20 17:13:10.311569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.231 qpair failed and we were unable to recover it.
00:30:18.231 [2024-11-20 17:13:10.321441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.231 [2024-11-20 17:13:10.321512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.232 [2024-11-20 17:13:10.321531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.232 [2024-11-20 17:13:10.321539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.232 [2024-11-20 17:13:10.321546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.232 [2024-11-20 17:13:10.321563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.232 qpair failed and we were unable to recover it.
00:30:18.232 [2024-11-20 17:13:10.331433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.232 [2024-11-20 17:13:10.331492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.232 [2024-11-20 17:13:10.331511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.232 [2024-11-20 17:13:10.331518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.232 [2024-11-20 17:13:10.331526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.232 [2024-11-20 17:13:10.331544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.232 qpair failed and we were unable to recover it.
00:30:18.232 [2024-11-20 17:13:10.341504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.232 [2024-11-20 17:13:10.341569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.232 [2024-11-20 17:13:10.341592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.232 [2024-11-20 17:13:10.341599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.232 [2024-11-20 17:13:10.341606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.232 [2024-11-20 17:13:10.341624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.232 qpair failed and we were unable to recover it.
00:30:18.232 [2024-11-20 17:13:10.351578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.232 [2024-11-20 17:13:10.351688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.232 [2024-11-20 17:13:10.351704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.232 [2024-11-20 17:13:10.351712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.232 [2024-11-20 17:13:10.351718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.232 [2024-11-20 17:13:10.351735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.232 qpair failed and we were unable to recover it.
00:30:18.232 [2024-11-20 17:13:10.361522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.232 [2024-11-20 17:13:10.361592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.232 [2024-11-20 17:13:10.361610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.232 [2024-11-20 17:13:10.361617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.232 [2024-11-20 17:13:10.361623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.232 [2024-11-20 17:13:10.361641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.232 qpair failed and we were unable to recover it.
00:30:18.232 [2024-11-20 17:13:10.371591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.232 [2024-11-20 17:13:10.371660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.232 [2024-11-20 17:13:10.371677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.232 [2024-11-20 17:13:10.371685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.232 [2024-11-20 17:13:10.371691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.232 [2024-11-20 17:13:10.371707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.232 qpair failed and we were unable to recover it.
00:30:18.232 [2024-11-20 17:13:10.381638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.232 [2024-11-20 17:13:10.381731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.232 [2024-11-20 17:13:10.381748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.232 [2024-11-20 17:13:10.381755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.232 [2024-11-20 17:13:10.381768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.232 [2024-11-20 17:13:10.381785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.232 qpair failed and we were unable to recover it.
00:30:18.232 [2024-11-20 17:13:10.391669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.232 [2024-11-20 17:13:10.391781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.259 [2024-11-20 17:13:10.391799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.259 [2024-11-20 17:13:10.391807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.259 [2024-11-20 17:13:10.391814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.259 [2024-11-20 17:13:10.391831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.259 qpair failed and we were unable to recover it.
00:30:18.524 [2024-11-20 17:13:10.401645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.524 [2024-11-20 17:13:10.401713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.524 [2024-11-20 17:13:10.401732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.524 [2024-11-20 17:13:10.401740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.524 [2024-11-20 17:13:10.401747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.524 [2024-11-20 17:13:10.401764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.524 qpair failed and we were unable to recover it.
00:30:18.524 [2024-11-20 17:13:10.411700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.524 [2024-11-20 17:13:10.411768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.524 [2024-11-20 17:13:10.411787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.524 [2024-11-20 17:13:10.411798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.524 [2024-11-20 17:13:10.411805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.524 [2024-11-20 17:13:10.411824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.524 qpair failed and we were unable to recover it.
00:30:18.524 [2024-11-20 17:13:10.421724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.524 [2024-11-20 17:13:10.421792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.524 [2024-11-20 17:13:10.421808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.524 [2024-11-20 17:13:10.421816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.524 [2024-11-20 17:13:10.421822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.524 [2024-11-20 17:13:10.421840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.524 qpair failed and we were unable to recover it.
00:30:18.524 [2024-11-20 17:13:10.431794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.524 [2024-11-20 17:13:10.431871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.524 [2024-11-20 17:13:10.431892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.524 [2024-11-20 17:13:10.431902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.524 [2024-11-20 17:13:10.431910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.525 [2024-11-20 17:13:10.431928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.525 qpair failed and we were unable to recover it.
00:30:18.525 [2024-11-20 17:13:10.441806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.525 [2024-11-20 17:13:10.441869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.525 [2024-11-20 17:13:10.441887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.525 [2024-11-20 17:13:10.441895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.525 [2024-11-20 17:13:10.441902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.525 [2024-11-20 17:13:10.441919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.525 qpair failed and we were unable to recover it.
00:30:18.525 [2024-11-20 17:13:10.451822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.525 [2024-11-20 17:13:10.451896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.525 [2024-11-20 17:13:10.451934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.525 [2024-11-20 17:13:10.451944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.525 [2024-11-20 17:13:10.451951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.525 [2024-11-20 17:13:10.451977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.525 qpair failed and we were unable to recover it.
00:30:18.525 [2024-11-20 17:13:10.461836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.525 [2024-11-20 17:13:10.461912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.525 [2024-11-20 17:13:10.461932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.525 [2024-11-20 17:13:10.461940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.525 [2024-11-20 17:13:10.461947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.525 [2024-11-20 17:13:10.461966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.525 qpair failed and we were unable to recover it.
00:30:18.525 [2024-11-20 17:13:10.471903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.525 [2024-11-20 17:13:10.471972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.525 [2024-11-20 17:13:10.472003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.525 [2024-11-20 17:13:10.472011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.525 [2024-11-20 17:13:10.472017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.525 [2024-11-20 17:13:10.472035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.525 qpair failed and we were unable to recover it.
00:30:18.525 [2024-11-20 17:13:10.481913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.525 [2024-11-20 17:13:10.482024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.525 [2024-11-20 17:13:10.482043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.525 [2024-11-20 17:13:10.482051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.525 [2024-11-20 17:13:10.482057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.525 [2024-11-20 17:13:10.482075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.525 qpair failed and we were unable to recover it.
00:30:18.525 [2024-11-20 17:13:10.491941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.525 [2024-11-20 17:13:10.492016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.525 [2024-11-20 17:13:10.492035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.525 [2024-11-20 17:13:10.492042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.525 [2024-11-20 17:13:10.492049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.525 [2024-11-20 17:13:10.492066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.525 qpair failed and we were unable to recover it.
00:30:18.525 [2024-11-20 17:13:10.501993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.525 [2024-11-20 17:13:10.502087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.525 [2024-11-20 17:13:10.502106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.525 [2024-11-20 17:13:10.502114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.525 [2024-11-20 17:13:10.502121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.525 [2024-11-20 17:13:10.502138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.525 qpair failed and we were unable to recover it.
00:30:18.525 [2024-11-20 17:13:10.512042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.525 [2024-11-20 17:13:10.512117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.525 [2024-11-20 17:13:10.512137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.525 [2024-11-20 17:13:10.512151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.525 [2024-11-20 17:13:10.512164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.525 [2024-11-20 17:13:10.512183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.525 qpair failed and we were unable to recover it.
00:30:18.525 [2024-11-20 17:13:10.522074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.525 [2024-11-20 17:13:10.522137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.525 [2024-11-20 17:13:10.522156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.525 [2024-11-20 17:13:10.522170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.525 [2024-11-20 17:13:10.522177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.525 [2024-11-20 17:13:10.522195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.525 qpair failed and we were unable to recover it.
00:30:18.525 [2024-11-20 17:13:10.532074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.525 [2024-11-20 17:13:10.532141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.525 [2024-11-20 17:13:10.532165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.525 [2024-11-20 17:13:10.532173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.525 [2024-11-20 17:13:10.532180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.525 [2024-11-20 17:13:10.532197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.525 qpair failed and we were unable to recover it.
00:30:18.525 [2024-11-20 17:13:10.542123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.525 [2024-11-20 17:13:10.542200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.525 [2024-11-20 17:13:10.542217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.525 [2024-11-20 17:13:10.542225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.525 [2024-11-20 17:13:10.542231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.525 [2024-11-20 17:13:10.542249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.525 qpair failed and we were unable to recover it.
00:30:18.525 [2024-11-20 17:13:10.552146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.525 [2024-11-20 17:13:10.552216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.525 [2024-11-20 17:13:10.552232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.525 [2024-11-20 17:13:10.552240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.525 [2024-11-20 17:13:10.552246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:18.525 [2024-11-20 17:13:10.552264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:18.525 qpair failed and we were unable to recover it.
00:30:18.525 [2024-11-20 17:13:10.562192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.525 [2024-11-20 17:13:10.562291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.525 [2024-11-20 17:13:10.562309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.526 [2024-11-20 17:13:10.562317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.526 [2024-11-20 17:13:10.562324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.526 [2024-11-20 17:13:10.562341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.526 qpair failed and we were unable to recover it. 00:30:18.526 [2024-11-20 17:13:10.572195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.526 [2024-11-20 17:13:10.572262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.526 [2024-11-20 17:13:10.572280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.526 [2024-11-20 17:13:10.572287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.526 [2024-11-20 17:13:10.572294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.526 [2024-11-20 17:13:10.572312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.526 qpair failed and we were unable to recover it. 00:30:18.526 [2024-11-20 17:13:10.582201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.526 [2024-11-20 17:13:10.582277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.526 [2024-11-20 17:13:10.582295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.526 [2024-11-20 17:13:10.582302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.526 [2024-11-20 17:13:10.582309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.526 [2024-11-20 17:13:10.582326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.526 qpair failed and we were unable to recover it. 
00:30:18.526 [2024-11-20 17:13:10.592297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.526 [2024-11-20 17:13:10.592374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.526 [2024-11-20 17:13:10.592390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.526 [2024-11-20 17:13:10.592398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.526 [2024-11-20 17:13:10.592405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.526 [2024-11-20 17:13:10.592422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.526 qpair failed and we were unable to recover it. 00:30:18.526 [2024-11-20 17:13:10.602269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.526 [2024-11-20 17:13:10.602338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.526 [2024-11-20 17:13:10.602356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.526 [2024-11-20 17:13:10.602364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.526 [2024-11-20 17:13:10.602371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.526 [2024-11-20 17:13:10.602388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.526 qpair failed and we were unable to recover it. 00:30:18.526 [2024-11-20 17:13:10.612284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.526 [2024-11-20 17:13:10.612347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.526 [2024-11-20 17:13:10.612364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.526 [2024-11-20 17:13:10.612372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.526 [2024-11-20 17:13:10.612378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.526 [2024-11-20 17:13:10.612396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.526 qpair failed and we were unable to recover it. 
00:30:18.526 [2024-11-20 17:13:10.622337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.526 [2024-11-20 17:13:10.622405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.526 [2024-11-20 17:13:10.622422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.526 [2024-11-20 17:13:10.622429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.526 [2024-11-20 17:13:10.622436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.526 [2024-11-20 17:13:10.622453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.526 qpair failed and we were unable to recover it. 00:30:18.526 [2024-11-20 17:13:10.632304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.526 [2024-11-20 17:13:10.632373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.526 [2024-11-20 17:13:10.632391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.526 [2024-11-20 17:13:10.632399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.526 [2024-11-20 17:13:10.632406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.526 [2024-11-20 17:13:10.632425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.526 qpair failed and we were unable to recover it. 00:30:18.526 [2024-11-20 17:13:10.642454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.526 [2024-11-20 17:13:10.642520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.526 [2024-11-20 17:13:10.642538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.526 [2024-11-20 17:13:10.642553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.526 [2024-11-20 17:13:10.642559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.526 [2024-11-20 17:13:10.642577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.526 qpair failed and we were unable to recover it. 
00:30:18.526 [2024-11-20 17:13:10.652418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.526 [2024-11-20 17:13:10.652484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.526 [2024-11-20 17:13:10.652502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.526 [2024-11-20 17:13:10.652510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.526 [2024-11-20 17:13:10.652516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.526 [2024-11-20 17:13:10.652533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.526 qpair failed and we were unable to recover it. 00:30:18.526 [2024-11-20 17:13:10.662495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.526 [2024-11-20 17:13:10.662560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.526 [2024-11-20 17:13:10.662578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.526 [2024-11-20 17:13:10.662585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.526 [2024-11-20 17:13:10.662592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.526 [2024-11-20 17:13:10.662609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.526 qpair failed and we were unable to recover it. 00:30:18.526 [2024-11-20 17:13:10.672522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.526 [2024-11-20 17:13:10.672636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.526 [2024-11-20 17:13:10.672653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.526 [2024-11-20 17:13:10.672661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.526 [2024-11-20 17:13:10.672667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.526 [2024-11-20 17:13:10.672683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.526 qpair failed and we were unable to recover it. 
00:30:18.526 [2024-11-20 17:13:10.682557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.526 [2024-11-20 17:13:10.682614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.526 [2024-11-20 17:13:10.682633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.526 [2024-11-20 17:13:10.682640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.526 [2024-11-20 17:13:10.682646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.526 [2024-11-20 17:13:10.682670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.526 qpair failed and we were unable to recover it. 00:30:18.526 [2024-11-20 17:13:10.692551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.527 [2024-11-20 17:13:10.692614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.527 [2024-11-20 17:13:10.692632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.527 [2024-11-20 17:13:10.692639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.527 [2024-11-20 17:13:10.692645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.527 [2024-11-20 17:13:10.692662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.527 qpair failed and we were unable to recover it. 00:30:18.791 [2024-11-20 17:13:10.702593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.791 [2024-11-20 17:13:10.702659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.791 [2024-11-20 17:13:10.702676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.791 [2024-11-20 17:13:10.702684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.791 [2024-11-20 17:13:10.702690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.791 [2024-11-20 17:13:10.702708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.791 qpair failed and we were unable to recover it. 
00:30:18.791 [2024-11-20 17:13:10.712700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.791 [2024-11-20 17:13:10.712770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.791 [2024-11-20 17:13:10.712787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.791 [2024-11-20 17:13:10.712795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.791 [2024-11-20 17:13:10.712801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.791 [2024-11-20 17:13:10.712818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.791 qpair failed and we were unable to recover it. 00:30:18.791 [2024-11-20 17:13:10.722668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.791 [2024-11-20 17:13:10.722742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.791 [2024-11-20 17:13:10.722760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.791 [2024-11-20 17:13:10.722767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.791 [2024-11-20 17:13:10.722774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.791 [2024-11-20 17:13:10.722791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.791 qpair failed and we were unable to recover it. 00:30:18.791 [2024-11-20 17:13:10.732686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.791 [2024-11-20 17:13:10.732765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.791 [2024-11-20 17:13:10.732802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.791 [2024-11-20 17:13:10.732812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.791 [2024-11-20 17:13:10.732819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.791 [2024-11-20 17:13:10.732844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.791 qpair failed and we were unable to recover it. 
00:30:18.791 [2024-11-20 17:13:10.742735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.791 [2024-11-20 17:13:10.742812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.791 [2024-11-20 17:13:10.742844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.791 [2024-11-20 17:13:10.742852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.791 [2024-11-20 17:13:10.742859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.791 [2024-11-20 17:13:10.742882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.791 qpair failed and we were unable to recover it. 00:30:18.791 [2024-11-20 17:13:10.752760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.791 [2024-11-20 17:13:10.752829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.791 [2024-11-20 17:13:10.752847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.791 [2024-11-20 17:13:10.752855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.791 [2024-11-20 17:13:10.752861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.791 [2024-11-20 17:13:10.752880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.791 qpair failed and we were unable to recover it. 00:30:18.791 [2024-11-20 17:13:10.762812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.791 [2024-11-20 17:13:10.762888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.791 [2024-11-20 17:13:10.762906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.791 [2024-11-20 17:13:10.762913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.791 [2024-11-20 17:13:10.762919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.791 [2024-11-20 17:13:10.762937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.791 qpair failed and we were unable to recover it. 
00:30:18.791 [2024-11-20 17:13:10.772826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.791 [2024-11-20 17:13:10.772888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.791 [2024-11-20 17:13:10.772913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.791 [2024-11-20 17:13:10.772920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.791 [2024-11-20 17:13:10.772927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.791 [2024-11-20 17:13:10.772946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.791 qpair failed and we were unable to recover it. 00:30:18.791 [2024-11-20 17:13:10.782885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.791 [2024-11-20 17:13:10.782959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.791 [2024-11-20 17:13:10.782978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.791 [2024-11-20 17:13:10.782985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.791 [2024-11-20 17:13:10.782992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.791 [2024-11-20 17:13:10.783010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.791 qpair failed and we were unable to recover it. 00:30:18.791 [2024-11-20 17:13:10.792914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.791 [2024-11-20 17:13:10.793022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.791 [2024-11-20 17:13:10.793060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.791 [2024-11-20 17:13:10.793070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.791 [2024-11-20 17:13:10.793078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.791 [2024-11-20 17:13:10.793103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.791 qpair failed and we were unable to recover it. 
00:30:18.791 [2024-11-20 17:13:10.802897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.791 [2024-11-20 17:13:10.802990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.791 [2024-11-20 17:13:10.803011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.791 [2024-11-20 17:13:10.803020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.791 [2024-11-20 17:13:10.803027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.792 [2024-11-20 17:13:10.803046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.792 qpair failed and we were unable to recover it. 00:30:18.792 [2024-11-20 17:13:10.812936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.792 [2024-11-20 17:13:10.813003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.792 [2024-11-20 17:13:10.813021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.792 [2024-11-20 17:13:10.813029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.792 [2024-11-20 17:13:10.813043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.792 [2024-11-20 17:13:10.813062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.792 qpair failed and we were unable to recover it. 00:30:18.792 [2024-11-20 17:13:10.822970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.792 [2024-11-20 17:13:10.823035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.792 [2024-11-20 17:13:10.823052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.792 [2024-11-20 17:13:10.823060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.792 [2024-11-20 17:13:10.823066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.792 [2024-11-20 17:13:10.823084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.792 qpair failed and we were unable to recover it. 
00:30:18.792 [2024-11-20 17:13:10.833021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.792 [2024-11-20 17:13:10.833098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.792 [2024-11-20 17:13:10.833117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.792 [2024-11-20 17:13:10.833126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.792 [2024-11-20 17:13:10.833133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.792 [2024-11-20 17:13:10.833153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.792 qpair failed and we were unable to recover it. 00:30:18.792 [2024-11-20 17:13:10.843024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.792 [2024-11-20 17:13:10.843091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.792 [2024-11-20 17:13:10.843108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.792 [2024-11-20 17:13:10.843116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.792 [2024-11-20 17:13:10.843122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.792 [2024-11-20 17:13:10.843140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.792 qpair failed and we were unable to recover it. 00:30:18.792 [2024-11-20 17:13:10.853068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.792 [2024-11-20 17:13:10.853138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.792 [2024-11-20 17:13:10.853155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.792 [2024-11-20 17:13:10.853168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.792 [2024-11-20 17:13:10.853175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.792 [2024-11-20 17:13:10.853192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.792 qpair failed and we were unable to recover it. 
00:30:18.792 [2024-11-20 17:13:10.863102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.792 [2024-11-20 17:13:10.863176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.792 [2024-11-20 17:13:10.863194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.792 [2024-11-20 17:13:10.863201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.792 [2024-11-20 17:13:10.863208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.792 [2024-11-20 17:13:10.863225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.792 qpair failed and we were unable to recover it. 00:30:18.792 [2024-11-20 17:13:10.873199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.792 [2024-11-20 17:13:10.873265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.792 [2024-11-20 17:13:10.873284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.792 [2024-11-20 17:13:10.873291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.792 [2024-11-20 17:13:10.873298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.792 [2024-11-20 17:13:10.873315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.792 qpair failed and we were unable to recover it. 00:30:18.792 [2024-11-20 17:13:10.883139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.792 [2024-11-20 17:13:10.883205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.792 [2024-11-20 17:13:10.883225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.792 [2024-11-20 17:13:10.883232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.792 [2024-11-20 17:13:10.883239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.792 [2024-11-20 17:13:10.883256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.792 qpair failed and we were unable to recover it. 
00:30:18.792 [2024-11-20 17:13:10.893183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.792 [2024-11-20 17:13:10.893244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.792 [2024-11-20 17:13:10.893263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.792 [2024-11-20 17:13:10.893270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.792 [2024-11-20 17:13:10.893277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.792 [2024-11-20 17:13:10.893294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.792 qpair failed and we were unable to recover it. 00:30:18.792 [2024-11-20 17:13:10.903206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.792 [2024-11-20 17:13:10.903270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.792 [2024-11-20 17:13:10.903295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.792 [2024-11-20 17:13:10.903302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.792 [2024-11-20 17:13:10.903309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.792 [2024-11-20 17:13:10.903327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.792 qpair failed and we were unable to recover it. 00:30:18.792 [2024-11-20 17:13:10.913271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.792 [2024-11-20 17:13:10.913351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.792 [2024-11-20 17:13:10.913367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.792 [2024-11-20 17:13:10.913375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.792 [2024-11-20 17:13:10.913381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.792 [2024-11-20 17:13:10.913398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.792 qpair failed and we were unable to recover it. 
00:30:18.792 [2024-11-20 17:13:10.923310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.792 [2024-11-20 17:13:10.923372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.792 [2024-11-20 17:13:10.923389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.792 [2024-11-20 17:13:10.923397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.792 [2024-11-20 17:13:10.923403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.792 [2024-11-20 17:13:10.923421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.792 qpair failed and we were unable to recover it. 00:30:18.792 [2024-11-20 17:13:10.933312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.792 [2024-11-20 17:13:10.933374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.792 [2024-11-20 17:13:10.933392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.792 [2024-11-20 17:13:10.933399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.792 [2024-11-20 17:13:10.933405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.793 [2024-11-20 17:13:10.933423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.793 qpair failed and we were unable to recover it. 00:30:18.793 [2024-11-20 17:13:10.943354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.793 [2024-11-20 17:13:10.943425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.793 [2024-11-20 17:13:10.943462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.793 [2024-11-20 17:13:10.943470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.793 [2024-11-20 17:13:10.943482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.793 [2024-11-20 17:13:10.943507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.793 qpair failed and we were unable to recover it. 
00:30:18.793 [2024-11-20 17:13:10.953572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.793 [2024-11-20 17:13:10.953654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.793 [2024-11-20 17:13:10.953673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.793 [2024-11-20 17:13:10.953681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.793 [2024-11-20 17:13:10.953687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:18.793 [2024-11-20 17:13:10.953706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.793 qpair failed and we were unable to recover it. 00:30:19.056 [2024-11-20 17:13:10.963456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.056 [2024-11-20 17:13:10.963568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.056 [2024-11-20 17:13:10.963586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.056 [2024-11-20 17:13:10.963595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.056 [2024-11-20 17:13:10.963602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:19.056 [2024-11-20 17:13:10.963620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.056 qpair failed and we were unable to recover it. 00:30:19.056 [2024-11-20 17:13:10.973446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.056 [2024-11-20 17:13:10.973508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.056 [2024-11-20 17:13:10.973526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.056 [2024-11-20 17:13:10.973534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.056 [2024-11-20 17:13:10.973541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:19.056 [2024-11-20 17:13:10.973559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.056 qpair failed and we were unable to recover it. 
00:30:19.056 [2024-11-20 17:13:10.983511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.056 [2024-11-20 17:13:10.983581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.056 [2024-11-20 17:13:10.983603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.056 [2024-11-20 17:13:10.983614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.056 [2024-11-20 17:13:10.983623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:19.056 [2024-11-20 17:13:10.983642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.056 qpair failed and we were unable to recover it. 00:30:19.056 [2024-11-20 17:13:10.993568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.056 [2024-11-20 17:13:10.993643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.056 [2024-11-20 17:13:10.993661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.056 [2024-11-20 17:13:10.993669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.056 [2024-11-20 17:13:10.993676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:19.056 [2024-11-20 17:13:10.993693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.056 qpair failed and we were unable to recover it. 00:30:19.056 [2024-11-20 17:13:11.003559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.056 [2024-11-20 17:13:11.003627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.056 [2024-11-20 17:13:11.003647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.056 [2024-11-20 17:13:11.003654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.056 [2024-11-20 17:13:11.003661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:19.056 [2024-11-20 17:13:11.003680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.056 qpair failed and we were unable to recover it. 
00:30:19.056 [2024-11-20 17:13:11.013478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.056 [2024-11-20 17:13:11.013545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.056 [2024-11-20 17:13:11.013564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.056 [2024-11-20 17:13:11.013571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.056 [2024-11-20 17:13:11.013578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:19.056 [2024-11-20 17:13:11.013596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.056 qpair failed and we were unable to recover it. 00:30:19.056 [2024-11-20 17:13:11.023617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.056 [2024-11-20 17:13:11.023687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.056 [2024-11-20 17:13:11.023704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.056 [2024-11-20 17:13:11.023712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.056 [2024-11-20 17:13:11.023718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:19.056 [2024-11-20 17:13:11.023735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.056 qpair failed and we were unable to recover it. 00:30:19.056 [2024-11-20 17:13:11.033668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.056 [2024-11-20 17:13:11.033748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.056 [2024-11-20 17:13:11.033772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.056 [2024-11-20 17:13:11.033779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.056 [2024-11-20 17:13:11.033785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:19.056 [2024-11-20 17:13:11.033803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.056 qpair failed and we were unable to recover it. 
00:30:19.056 [2024-11-20 17:13:11.043565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.056 [2024-11-20 17:13:11.043649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.056 [2024-11-20 17:13:11.043667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.056 [2024-11-20 17:13:11.043674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.056 [2024-11-20 17:13:11.043681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:19.056 [2024-11-20 17:13:11.043697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.056 qpair failed and we were unable to recover it. 00:30:19.056 [2024-11-20 17:13:11.053700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.056 [2024-11-20 17:13:11.053770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.056 [2024-11-20 17:13:11.053787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.056 [2024-11-20 17:13:11.053795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.056 [2024-11-20 17:13:11.053801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:19.056 [2024-11-20 17:13:11.053818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.056 qpair failed and we were unable to recover it. 00:30:19.056 [2024-11-20 17:13:11.063711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.056 [2024-11-20 17:13:11.063773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.056 [2024-11-20 17:13:11.063790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.056 [2024-11-20 17:13:11.063797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.057 [2024-11-20 17:13:11.063804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:19.057 [2024-11-20 17:13:11.063821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.057 qpair failed and we were unable to recover it. 
00:30:19.057 [2024-11-20 17:13:11.073790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.057 [2024-11-20 17:13:11.073870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.057 [2024-11-20 17:13:11.073888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.057 [2024-11-20 17:13:11.073901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.057 [2024-11-20 17:13:11.073908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:19.057 [2024-11-20 17:13:11.073926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.057 qpair failed and we were unable to recover it. 00:30:19.057 [2024-11-20 17:13:11.083782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.057 [2024-11-20 17:13:11.083838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.057 [2024-11-20 17:13:11.083856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.057 [2024-11-20 17:13:11.083863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.057 [2024-11-20 17:13:11.083870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:19.057 [2024-11-20 17:13:11.083887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.057 qpair failed and we were unable to recover it. 00:30:19.057 [2024-11-20 17:13:11.093822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.057 [2024-11-20 17:13:11.093894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.057 [2024-11-20 17:13:11.093912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.057 [2024-11-20 17:13:11.093919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.057 [2024-11-20 17:13:11.093926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:19.057 [2024-11-20 17:13:11.093943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.057 qpair failed and we were unable to recover it. 
00:30:19.057 [2024-11-20 17:13:11.103835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.057 [2024-11-20 17:13:11.103903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.057 [2024-11-20 17:13:11.103921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.057 [2024-11-20 17:13:11.103928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.057 [2024-11-20 17:13:11.103935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:19.057 [2024-11-20 17:13:11.103952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.057 qpair failed and we were unable to recover it. 00:30:19.057 [2024-11-20 17:13:11.113914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.057 [2024-11-20 17:13:11.113987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.057 [2024-11-20 17:13:11.114004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.057 [2024-11-20 17:13:11.114012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.057 [2024-11-20 17:13:11.114018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:19.057 [2024-11-20 17:13:11.114035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.057 qpair failed and we were unable to recover it. 00:30:19.057 [2024-11-20 17:13:11.123936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.057 [2024-11-20 17:13:11.124000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.057 [2024-11-20 17:13:11.124018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.057 [2024-11-20 17:13:11.124026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.057 [2024-11-20 17:13:11.124032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:19.057 [2024-11-20 17:13:11.124049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.057 qpair failed and we were unable to recover it. 
00:30:19.057 [2024-11-20 17:13:11.133813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.057 [2024-11-20 17:13:11.133885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.057 [2024-11-20 17:13:11.133907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.057 [2024-11-20 17:13:11.133915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.057 [2024-11-20 17:13:11.133921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.057 [2024-11-20 17:13:11.133940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.057 qpair failed and we were unable to recover it.
00:30:19.057 [2024-11-20 17:13:11.143997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.057 [2024-11-20 17:13:11.144062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.057 [2024-11-20 17:13:11.144080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.057 [2024-11-20 17:13:11.144088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.057 [2024-11-20 17:13:11.144094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.057 [2024-11-20 17:13:11.144112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.057 qpair failed and we were unable to recover it.
00:30:19.057 [2024-11-20 17:13:11.154054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.057 [2024-11-20 17:13:11.154128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.057 [2024-11-20 17:13:11.154146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.057 [2024-11-20 17:13:11.154154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.057 [2024-11-20 17:13:11.154165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.057 [2024-11-20 17:13:11.154183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.057 qpair failed and we were unable to recover it.
00:30:19.057 [2024-11-20 17:13:11.164005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.057 [2024-11-20 17:13:11.164076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.057 [2024-11-20 17:13:11.164094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.057 [2024-11-20 17:13:11.164101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.057 [2024-11-20 17:13:11.164108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.057 [2024-11-20 17:13:11.164125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.057 qpair failed and we were unable to recover it.
00:30:19.057 [2024-11-20 17:13:11.174087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.057 [2024-11-20 17:13:11.174155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.057 [2024-11-20 17:13:11.174177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.057 [2024-11-20 17:13:11.174184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.057 [2024-11-20 17:13:11.174191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.057 [2024-11-20 17:13:11.174208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.057 qpair failed and we were unable to recover it.
00:30:19.057 [2024-11-20 17:13:11.184154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.057 [2024-11-20 17:13:11.184242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.057 [2024-11-20 17:13:11.184259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.057 [2024-11-20 17:13:11.184266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.057 [2024-11-20 17:13:11.184273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.057 [2024-11-20 17:13:11.184289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.057 qpair failed and we were unable to recover it.
00:30:19.057 [2024-11-20 17:13:11.194190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.058 [2024-11-20 17:13:11.194266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.058 [2024-11-20 17:13:11.194282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.058 [2024-11-20 17:13:11.194290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.058 [2024-11-20 17:13:11.194297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.058 [2024-11-20 17:13:11.194314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.058 qpair failed and we were unable to recover it.
00:30:19.058 [2024-11-20 17:13:11.204167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.058 [2024-11-20 17:13:11.204256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.058 [2024-11-20 17:13:11.204275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.058 [2024-11-20 17:13:11.204293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.058 [2024-11-20 17:13:11.204300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.058 [2024-11-20 17:13:11.204319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.058 qpair failed and we were unable to recover it.
00:30:19.058 [2024-11-20 17:13:11.214199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.058 [2024-11-20 17:13:11.214257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.058 [2024-11-20 17:13:11.214274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.058 [2024-11-20 17:13:11.214281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.058 [2024-11-20 17:13:11.214288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.058 [2024-11-20 17:13:11.214304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.058 qpair failed and we were unable to recover it.
00:30:19.058 [2024-11-20 17:13:11.224117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.058 [2024-11-20 17:13:11.224185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.058 [2024-11-20 17:13:11.224203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.058 [2024-11-20 17:13:11.224210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.058 [2024-11-20 17:13:11.224216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.058 [2024-11-20 17:13:11.224234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.058 qpair failed and we were unable to recover it.
00:30:19.322 [2024-11-20 17:13:11.234310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.322 [2024-11-20 17:13:11.234374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.322 [2024-11-20 17:13:11.234391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.322 [2024-11-20 17:13:11.234399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.322 [2024-11-20 17:13:11.234406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.322 [2024-11-20 17:13:11.234422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.322 qpair failed and we were unable to recover it.
00:30:19.322 [2024-11-20 17:13:11.244316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.322 [2024-11-20 17:13:11.244386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.322 [2024-11-20 17:13:11.244403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.322 [2024-11-20 17:13:11.244411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.322 [2024-11-20 17:13:11.244418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.322 [2024-11-20 17:13:11.244440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.322 qpair failed and we were unable to recover it.
00:30:19.322 [2024-11-20 17:13:11.254350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.322 [2024-11-20 17:13:11.254410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.322 [2024-11-20 17:13:11.254427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.322 [2024-11-20 17:13:11.254434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.322 [2024-11-20 17:13:11.254440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.322 [2024-11-20 17:13:11.254457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.322 qpair failed and we were unable to recover it.
00:30:19.322 [2024-11-20 17:13:11.264399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.322 [2024-11-20 17:13:11.264489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.322 [2024-11-20 17:13:11.264506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.322 [2024-11-20 17:13:11.264513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.322 [2024-11-20 17:13:11.264520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.322 [2024-11-20 17:13:11.264536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.322 qpair failed and we were unable to recover it.
00:30:19.322 [2024-11-20 17:13:11.274436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.322 [2024-11-20 17:13:11.274510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.322 [2024-11-20 17:13:11.274526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.322 [2024-11-20 17:13:11.274534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.322 [2024-11-20 17:13:11.274540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.322 [2024-11-20 17:13:11.274557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.322 qpair failed and we were unable to recover it.
00:30:19.322 [2024-11-20 17:13:11.284441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.322 [2024-11-20 17:13:11.284503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.322 [2024-11-20 17:13:11.284521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.322 [2024-11-20 17:13:11.284528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.322 [2024-11-20 17:13:11.284534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.322 [2024-11-20 17:13:11.284551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.322 qpair failed and we were unable to recover it.
00:30:19.322 [2024-11-20 17:13:11.294488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.322 [2024-11-20 17:13:11.294552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.322 [2024-11-20 17:13:11.294569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.322 [2024-11-20 17:13:11.294577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.322 [2024-11-20 17:13:11.294584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.322 [2024-11-20 17:13:11.294601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.322 qpair failed and we were unable to recover it.
00:30:19.322 [2024-11-20 17:13:11.304518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.322 [2024-11-20 17:13:11.304587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.322 [2024-11-20 17:13:11.304607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.322 [2024-11-20 17:13:11.304616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.322 [2024-11-20 17:13:11.304623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.322 [2024-11-20 17:13:11.304640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.322 qpair failed and we were unable to recover it.
00:30:19.322 [2024-11-20 17:13:11.314593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.322 [2024-11-20 17:13:11.314664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.322 [2024-11-20 17:13:11.314683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.322 [2024-11-20 17:13:11.314691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.322 [2024-11-20 17:13:11.314698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.322 [2024-11-20 17:13:11.314716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.322 qpair failed and we were unable to recover it.
00:30:19.323 [2024-11-20 17:13:11.324565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.323 [2024-11-20 17:13:11.324658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.323 [2024-11-20 17:13:11.324675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.323 [2024-11-20 17:13:11.324683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.323 [2024-11-20 17:13:11.324690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.323 [2024-11-20 17:13:11.324707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.323 qpair failed and we were unable to recover it.
00:30:19.323 [2024-11-20 17:13:11.334599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.323 [2024-11-20 17:13:11.334685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.323 [2024-11-20 17:13:11.334708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.323 [2024-11-20 17:13:11.334716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.323 [2024-11-20 17:13:11.334724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.323 [2024-11-20 17:13:11.334742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.323 qpair failed and we were unable to recover it.
00:30:19.323 [2024-11-20 17:13:11.344620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.323 [2024-11-20 17:13:11.344687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.323 [2024-11-20 17:13:11.344704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.323 [2024-11-20 17:13:11.344711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.323 [2024-11-20 17:13:11.344717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.323 [2024-11-20 17:13:11.344735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.323 qpair failed and we were unable to recover it.
00:30:19.323 [2024-11-20 17:13:11.354664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.323 [2024-11-20 17:13:11.354734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.323 [2024-11-20 17:13:11.354754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.323 [2024-11-20 17:13:11.354762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.323 [2024-11-20 17:13:11.354768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.323 [2024-11-20 17:13:11.354786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.323 qpair failed and we were unable to recover it.
00:30:19.323 [2024-11-20 17:13:11.364669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.323 [2024-11-20 17:13:11.364751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.323 [2024-11-20 17:13:11.364773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.323 [2024-11-20 17:13:11.364783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.323 [2024-11-20 17:13:11.364790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.323 [2024-11-20 17:13:11.364810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.323 qpair failed and we were unable to recover it.
00:30:19.323 [2024-11-20 17:13:11.374711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.323 [2024-11-20 17:13:11.374780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.323 [2024-11-20 17:13:11.374801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.323 [2024-11-20 17:13:11.374808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.323 [2024-11-20 17:13:11.374821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.323 [2024-11-20 17:13:11.374839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.323 qpair failed and we were unable to recover it.
00:30:19.323 [2024-11-20 17:13:11.384729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.323 [2024-11-20 17:13:11.384797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.323 [2024-11-20 17:13:11.384815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.323 [2024-11-20 17:13:11.384822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.323 [2024-11-20 17:13:11.384828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.323 [2024-11-20 17:13:11.384846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.323 qpair failed and we were unable to recover it.
00:30:19.323 [2024-11-20 17:13:11.394767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.323 [2024-11-20 17:13:11.394843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.323 [2024-11-20 17:13:11.394860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.323 [2024-11-20 17:13:11.394868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.323 [2024-11-20 17:13:11.394874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.323 [2024-11-20 17:13:11.394891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.323 qpair failed and we were unable to recover it.
00:30:19.323 [2024-11-20 17:13:11.404748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.323 [2024-11-20 17:13:11.404810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.323 [2024-11-20 17:13:11.404828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.323 [2024-11-20 17:13:11.404836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.323 [2024-11-20 17:13:11.404842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.323 [2024-11-20 17:13:11.404860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.323 qpair failed and we were unable to recover it.
00:30:19.323 [2024-11-20 17:13:11.414679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.323 [2024-11-20 17:13:11.414735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.323 [2024-11-20 17:13:11.414751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.323 [2024-11-20 17:13:11.414758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.323 [2024-11-20 17:13:11.414764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.323 [2024-11-20 17:13:11.414782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.323 qpair failed and we were unable to recover it.
00:30:19.323 [2024-11-20 17:13:11.424728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.323 [2024-11-20 17:13:11.424792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.323 [2024-11-20 17:13:11.424813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.323 [2024-11-20 17:13:11.424821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.323 [2024-11-20 17:13:11.424828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.323 [2024-11-20 17:13:11.424846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.323 qpair failed and we were unable to recover it.
00:30:19.323 [2024-11-20 17:13:11.434898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.323 [2024-11-20 17:13:11.435017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.323 [2024-11-20 17:13:11.435055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.323 [2024-11-20 17:13:11.435065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.323 [2024-11-20 17:13:11.435072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.323 [2024-11-20 17:13:11.435097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.323 qpair failed and we were unable to recover it.
00:30:19.323 [2024-11-20 17:13:11.444892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.323 [2024-11-20 17:13:11.444955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.323 [2024-11-20 17:13:11.444977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.323 [2024-11-20 17:13:11.444985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.323 [2024-11-20 17:13:11.444992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.323 [2024-11-20 17:13:11.445011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.323 qpair failed and we were unable to recover it.
00:30:19.324 [2024-11-20 17:13:11.454942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.324 [2024-11-20 17:13:11.455003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.324 [2024-11-20 17:13:11.455022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.324 [2024-11-20 17:13:11.455030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.324 [2024-11-20 17:13:11.455036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.324 [2024-11-20 17:13:11.455055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.324 qpair failed and we were unable to recover it.
00:30:19.324 [2024-11-20 17:13:11.464980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.324 [2024-11-20 17:13:11.465079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.324 [2024-11-20 17:13:11.465107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.324 [2024-11-20 17:13:11.465120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.324 [2024-11-20 17:13:11.465128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.324 [2024-11-20 17:13:11.465148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.324 qpair failed and we were unable to recover it.
00:30:19.324 [2024-11-20 17:13:11.475034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.324 [2024-11-20 17:13:11.475147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.324 [2024-11-20 17:13:11.475174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.324 [2024-11-20 17:13:11.475181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.324 [2024-11-20 17:13:11.475188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.324 [2024-11-20 17:13:11.475206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.324 qpair failed and we were unable to recover it.
00:30:19.324 [2024-11-20 17:13:11.485043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.324 [2024-11-20 17:13:11.485119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.324 [2024-11-20 17:13:11.485136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.324 [2024-11-20 17:13:11.485144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.324 [2024-11-20 17:13:11.485150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.324 [2024-11-20 17:13:11.485174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.324 qpair failed and we were unable to recover it.
00:30:19.587 [2024-11-20 17:13:11.495060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.587 [2024-11-20 17:13:11.495157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.587 [2024-11-20 17:13:11.495178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.587 [2024-11-20 17:13:11.495185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.587 [2024-11-20 17:13:11.495192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.587 [2024-11-20 17:13:11.495209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.587 qpair failed and we were unable to recover it.
00:30:19.587 [2024-11-20 17:13:11.505081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.587 [2024-11-20 17:13:11.505142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.587 [2024-11-20 17:13:11.505165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.587 [2024-11-20 17:13:11.505173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.587 [2024-11-20 17:13:11.505186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.587 [2024-11-20 17:13:11.505203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.587 qpair failed and we were unable to recover it.
00:30:19.587 [2024-11-20 17:13:11.515096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.587 [2024-11-20 17:13:11.515155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.587 [2024-11-20 17:13:11.515175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.587 [2024-11-20 17:13:11.515182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.587 [2024-11-20 17:13:11.515188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.587 [2024-11-20 17:13:11.515204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.587 qpair failed and we were unable to recover it.
00:30:19.587 [2024-11-20 17:13:11.525148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.587 [2024-11-20 17:13:11.525209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.587 [2024-11-20 17:13:11.525225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.587 [2024-11-20 17:13:11.525232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.587 [2024-11-20 17:13:11.525239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.587 [2024-11-20 17:13:11.525255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.587 qpair failed and we were unable to recover it.
00:30:19.587 [2024-11-20 17:13:11.535154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.587 [2024-11-20 17:13:11.535218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.587 [2024-11-20 17:13:11.535234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.587 [2024-11-20 17:13:11.535241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.587 [2024-11-20 17:13:11.535247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.587 [2024-11-20 17:13:11.535262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.587 qpair failed and we were unable to recover it.
00:30:19.587 [2024-11-20 17:13:11.545194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.587 [2024-11-20 17:13:11.545255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.587 [2024-11-20 17:13:11.545270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.587 [2024-11-20 17:13:11.545278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.587 [2024-11-20 17:13:11.545284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.587 [2024-11-20 17:13:11.545299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.587 qpair failed and we were unable to recover it.
00:30:19.587 [2024-11-20 17:13:11.555195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.587 [2024-11-20 17:13:11.555247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.587 [2024-11-20 17:13:11.555262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.587 [2024-11-20 17:13:11.555269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.587 [2024-11-20 17:13:11.555275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.587 [2024-11-20 17:13:11.555291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.587 qpair failed and we were unable to recover it.
00:30:19.587 [2024-11-20 17:13:11.565253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.587 [2024-11-20 17:13:11.565319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.587 [2024-11-20 17:13:11.565333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.587 [2024-11-20 17:13:11.565340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.587 [2024-11-20 17:13:11.565346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.587 [2024-11-20 17:13:11.565362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.587 qpair failed and we were unable to recover it.
00:30:19.587 [2024-11-20 17:13:11.575275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.587 [2024-11-20 17:13:11.575330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.587 [2024-11-20 17:13:11.575345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.587 [2024-11-20 17:13:11.575352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.587 [2024-11-20 17:13:11.575358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.587 [2024-11-20 17:13:11.575373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.587 qpair failed and we were unable to recover it.
00:30:19.587 [2024-11-20 17:13:11.585263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.587 [2024-11-20 17:13:11.585333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.587 [2024-11-20 17:13:11.585347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.587 [2024-11-20 17:13:11.585354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.587 [2024-11-20 17:13:11.585361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.587 [2024-11-20 17:13:11.585376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.587 qpair failed and we were unable to recover it.
00:30:19.587 [2024-11-20 17:13:11.595337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.587 [2024-11-20 17:13:11.595417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.587 [2024-11-20 17:13:11.595436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.587 [2024-11-20 17:13:11.595443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.587 [2024-11-20 17:13:11.595449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.587 [2024-11-20 17:13:11.595465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.587 qpair failed and we were unable to recover it.
00:30:19.587 [2024-11-20 17:13:11.605360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.587 [2024-11-20 17:13:11.605448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.588 [2024-11-20 17:13:11.605463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.588 [2024-11-20 17:13:11.605470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.588 [2024-11-20 17:13:11.605477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.588 [2024-11-20 17:13:11.605492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.588 qpair failed and we were unable to recover it.
00:30:19.588 [2024-11-20 17:13:11.615368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.588 [2024-11-20 17:13:11.615463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.588 [2024-11-20 17:13:11.615477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.588 [2024-11-20 17:13:11.615484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.588 [2024-11-20 17:13:11.615490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.588 [2024-11-20 17:13:11.615506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.588 qpair failed and we were unable to recover it.
00:30:19.588 [2024-11-20 17:13:11.625407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.588 [2024-11-20 17:13:11.625463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.588 [2024-11-20 17:13:11.625476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.588 [2024-11-20 17:13:11.625484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.588 [2024-11-20 17:13:11.625490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.588 [2024-11-20 17:13:11.625505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.588 qpair failed and we were unable to recover it.
00:30:19.588 [2024-11-20 17:13:11.635446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.588 [2024-11-20 17:13:11.635498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.588 [2024-11-20 17:13:11.635511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.588 [2024-11-20 17:13:11.635522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.588 [2024-11-20 17:13:11.635529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.588 [2024-11-20 17:13:11.635543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.588 qpair failed and we were unable to recover it.
00:30:19.588 [2024-11-20 17:13:11.645456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.588 [2024-11-20 17:13:11.645517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.588 [2024-11-20 17:13:11.645531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.588 [2024-11-20 17:13:11.645538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.588 [2024-11-20 17:13:11.645544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.588 [2024-11-20 17:13:11.645559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.588 qpair failed and we were unable to recover it.
00:30:19.588 [2024-11-20 17:13:11.655482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.588 [2024-11-20 17:13:11.655530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.588 [2024-11-20 17:13:11.655543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.588 [2024-11-20 17:13:11.655551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.588 [2024-11-20 17:13:11.655557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.588 [2024-11-20 17:13:11.655571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.588 qpair failed and we were unable to recover it.
00:30:19.588 [2024-11-20 17:13:11.665514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.588 [2024-11-20 17:13:11.665569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.588 [2024-11-20 17:13:11.665583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.588 [2024-11-20 17:13:11.665590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.588 [2024-11-20 17:13:11.665596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.588 [2024-11-20 17:13:11.665610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.588 qpair failed and we were unable to recover it.
00:30:19.588 [2024-11-20 17:13:11.675383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.588 [2024-11-20 17:13:11.675434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.588 [2024-11-20 17:13:11.675447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.588 [2024-11-20 17:13:11.675454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.588 [2024-11-20 17:13:11.675460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.588 [2024-11-20 17:13:11.675479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.588 qpair failed and we were unable to recover it.
00:30:19.588 [2024-11-20 17:13:11.685548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.588 [2024-11-20 17:13:11.685599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.588 [2024-11-20 17:13:11.685612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.588 [2024-11-20 17:13:11.685619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.588 [2024-11-20 17:13:11.685625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.588 [2024-11-20 17:13:11.685639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.588 qpair failed and we were unable to recover it.
00:30:19.588 [2024-11-20 17:13:11.695581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.588 [2024-11-20 17:13:11.695638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.588 [2024-11-20 17:13:11.695651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.588 [2024-11-20 17:13:11.695658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.588 [2024-11-20 17:13:11.695664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.588 [2024-11-20 17:13:11.695678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.588 qpair failed and we were unable to recover it.
00:30:19.588 [2024-11-20 17:13:11.705609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.588 [2024-11-20 17:13:11.705662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.588 [2024-11-20 17:13:11.705676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.588 [2024-11-20 17:13:11.705683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.588 [2024-11-20 17:13:11.705690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.588 [2024-11-20 17:13:11.705704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.588 qpair failed and we were unable to recover it.
00:30:19.588 [2024-11-20 17:13:11.715588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.588 [2024-11-20 17:13:11.715637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.588 [2024-11-20 17:13:11.715651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.588 [2024-11-20 17:13:11.715658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.588 [2024-11-20 17:13:11.715664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.588 [2024-11-20 17:13:11.715678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.588 qpair failed and we were unable to recover it.
00:30:19.588 [2024-11-20 17:13:11.725685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.588 [2024-11-20 17:13:11.725781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.588 [2024-11-20 17:13:11.725794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.588 [2024-11-20 17:13:11.725801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.588 [2024-11-20 17:13:11.725807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.588 [2024-11-20 17:13:11.725822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.588 qpair failed and we were unable to recover it.
00:30:19.588 [2024-11-20 17:13:11.735683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.588 [2024-11-20 17:13:11.735732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.589 [2024-11-20 17:13:11.735745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.589 [2024-11-20 17:13:11.735753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.589 [2024-11-20 17:13:11.735759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.589 [2024-11-20 17:13:11.735773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.589 qpair failed and we were unable to recover it.
00:30:19.589 [2024-11-20 17:13:11.745595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.589 [2024-11-20 17:13:11.745645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.589 [2024-11-20 17:13:11.745658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.589 [2024-11-20 17:13:11.745666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.589 [2024-11-20 17:13:11.745672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.589 [2024-11-20 17:13:11.745686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.589 qpair failed and we were unable to recover it.
00:30:19.589 [2024-11-20 17:13:11.755763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.589 [2024-11-20 17:13:11.755816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.589 [2024-11-20 17:13:11.755829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.589 [2024-11-20 17:13:11.755836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.589 [2024-11-20 17:13:11.755842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.589 [2024-11-20 17:13:11.755857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.589 qpair failed and we were unable to recover it.
00:30:19.852 [2024-11-20 17:13:11.765810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.852 [2024-11-20 17:13:11.765876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.852 [2024-11-20 17:13:11.765889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.852 [2024-11-20 17:13:11.765900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.852 [2024-11-20 17:13:11.765906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.852 [2024-11-20 17:13:11.765920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.852 qpair failed and we were unable to recover it.
00:30:19.852 [2024-11-20 17:13:11.775796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.852 [2024-11-20 17:13:11.775858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.852 [2024-11-20 17:13:11.775871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.852 [2024-11-20 17:13:11.775878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.852 [2024-11-20 17:13:11.775884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.852 [2024-11-20 17:13:11.775898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.852 qpair failed and we were unable to recover it.
00:30:19.852 [2024-11-20 17:13:11.785812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.852 [2024-11-20 17:13:11.785868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.852 [2024-11-20 17:13:11.785881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.852 [2024-11-20 17:13:11.785888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.852 [2024-11-20 17:13:11.785894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.852 [2024-11-20 17:13:11.785909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.852 qpair failed and we were unable to recover it.
00:30:19.852 [2024-11-20 17:13:11.795817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.852 [2024-11-20 17:13:11.795870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.852 [2024-11-20 17:13:11.795883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.852 [2024-11-20 17:13:11.795890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.852 [2024-11-20 17:13:11.795896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.852 [2024-11-20 17:13:11.795910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.852 qpair failed and we were unable to recover it.
00:30:19.852 [2024-11-20 17:13:11.805713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.852 [2024-11-20 17:13:11.805757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.852 [2024-11-20 17:13:11.805770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.852 [2024-11-20 17:13:11.805777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.852 [2024-11-20 17:13:11.805783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.852 [2024-11-20 17:13:11.805801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.852 qpair failed and we were unable to recover it.
00:30:19.852 [2024-11-20 17:13:11.815909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.852 [2024-11-20 17:13:11.815961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.852 [2024-11-20 17:13:11.815974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.852 [2024-11-20 17:13:11.815981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.852 [2024-11-20 17:13:11.815987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.852 [2024-11-20 17:13:11.816001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.852 qpair failed and we were unable to recover it.
00:30:19.852 [2024-11-20 17:13:11.825946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.852 [2024-11-20 17:13:11.826011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.852 [2024-11-20 17:13:11.826035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.852 [2024-11-20 17:13:11.826044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.852 [2024-11-20 17:13:11.826051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.852 [2024-11-20 17:13:11.826071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.852 qpair failed and we were unable to recover it.
00:30:19.852 [2024-11-20 17:13:11.835941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.852 [2024-11-20 17:13:11.835989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.852 [2024-11-20 17:13:11.836004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.852 [2024-11-20 17:13:11.836011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.852 [2024-11-20 17:13:11.836018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.852 [2024-11-20 17:13:11.836034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.852 qpair failed and we were unable to recover it.
00:30:19.852 [2024-11-20 17:13:11.845950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.852 [2024-11-20 17:13:11.845997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.852 [2024-11-20 17:13:11.846011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.852 [2024-11-20 17:13:11.846018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.852 [2024-11-20 17:13:11.846024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.852 [2024-11-20 17:13:11.846039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.852 qpair failed and we were unable to recover it.
00:30:19.852 [2024-11-20 17:13:11.855957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.852 [2024-11-20 17:13:11.856003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.852 [2024-11-20 17:13:11.856019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.852 [2024-11-20 17:13:11.856026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.852 [2024-11-20 17:13:11.856034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.852 [2024-11-20 17:13:11.856049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.852 qpair failed and we were unable to recover it.
00:30:19.852 [2024-11-20 17:13:11.866049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.852 [2024-11-20 17:13:11.866108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.853 [2024-11-20 17:13:11.866121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.853 [2024-11-20 17:13:11.866128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.853 [2024-11-20 17:13:11.866135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.853 [2024-11-20 17:13:11.866149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.853 qpair failed and we were unable to recover it.
00:30:19.853 [2024-11-20 17:13:11.876041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.853 [2024-11-20 17:13:11.876090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.853 [2024-11-20 17:13:11.876103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.853 [2024-11-20 17:13:11.876110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.853 [2024-11-20 17:13:11.876117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.853 [2024-11-20 17:13:11.876131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.853 qpair failed and we were unable to recover it.
00:30:19.853 [2024-11-20 17:13:11.886049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.853 [2024-11-20 17:13:11.886097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.853 [2024-11-20 17:13:11.886110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.853 [2024-11-20 17:13:11.886118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.853 [2024-11-20 17:13:11.886124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.853 [2024-11-20 17:13:11.886138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.853 qpair failed and we were unable to recover it.
00:30:19.853 [2024-11-20 17:13:11.896089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.853 [2024-11-20 17:13:11.896139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.853 [2024-11-20 17:13:11.896163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.853 [2024-11-20 17:13:11.896171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.853 [2024-11-20 17:13:11.896177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.853 [2024-11-20 17:13:11.896192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.853 qpair failed and we were unable to recover it.
00:30:19.853 [2024-11-20 17:13:11.906133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.853 [2024-11-20 17:13:11.906197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.853 [2024-11-20 17:13:11.906211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.853 [2024-11-20 17:13:11.906218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.853 [2024-11-20 17:13:11.906224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.853 [2024-11-20 17:13:11.906239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.853 qpair failed and we were unable to recover it.
00:30:19.853 [2024-11-20 17:13:11.916151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.853 [2024-11-20 17:13:11.916204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.853 [2024-11-20 17:13:11.916217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.853 [2024-11-20 17:13:11.916224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.853 [2024-11-20 17:13:11.916231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.853 [2024-11-20 17:13:11.916245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.853 qpair failed and we were unable to recover it.
00:30:19.853 [2024-11-20 17:13:11.926136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.853 [2024-11-20 17:13:11.926214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.853 [2024-11-20 17:13:11.926227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.853 [2024-11-20 17:13:11.926234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.853 [2024-11-20 17:13:11.926240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.853 [2024-11-20 17:13:11.926255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.853 qpair failed and we were unable to recover it.
00:30:19.853 [2024-11-20 17:13:11.936207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.853 [2024-11-20 17:13:11.936281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.853 [2024-11-20 17:13:11.936294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.853 [2024-11-20 17:13:11.936301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.853 [2024-11-20 17:13:11.936311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.853 [2024-11-20 17:13:11.936326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.853 qpair failed and we were unable to recover it.
00:30:19.853 [2024-11-20 17:13:11.946258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.853 [2024-11-20 17:13:11.946311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.853 [2024-11-20 17:13:11.946324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.853 [2024-11-20 17:13:11.946331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.853 [2024-11-20 17:13:11.946337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.853 [2024-11-20 17:13:11.946351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.853 qpair failed and we were unable to recover it.
00:30:19.853 [2024-11-20 17:13:11.956273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.853 [2024-11-20 17:13:11.956324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.853 [2024-11-20 17:13:11.956337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.853 [2024-11-20 17:13:11.956344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.853 [2024-11-20 17:13:11.956351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.853 [2024-11-20 17:13:11.956366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.853 qpair failed and we were unable to recover it.
00:30:19.853 [2024-11-20 17:13:11.966290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.853 [2024-11-20 17:13:11.966351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.853 [2024-11-20 17:13:11.966366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.853 [2024-11-20 17:13:11.966373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.853 [2024-11-20 17:13:11.966379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.853 [2024-11-20 17:13:11.966398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.853 qpair failed and we were unable to recover it.
00:30:19.853 [2024-11-20 17:13:11.976312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.853 [2024-11-20 17:13:11.976386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.853 [2024-11-20 17:13:11.976399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.853 [2024-11-20 17:13:11.976406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.853 [2024-11-20 17:13:11.976413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.853 [2024-11-20 17:13:11.976427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.853 qpair failed and we were unable to recover it.
00:30:19.853 [2024-11-20 17:13:11.986281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.853 [2024-11-20 17:13:11.986334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.853 [2024-11-20 17:13:11.986348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.853 [2024-11-20 17:13:11.986355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.854 [2024-11-20 17:13:11.986361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.854 [2024-11-20 17:13:11.986375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.854 qpair failed and we were unable to recover it.
00:30:19.854 [2024-11-20 17:13:11.996397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.854 [2024-11-20 17:13:11.996444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.854 [2024-11-20 17:13:11.996457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.854 [2024-11-20 17:13:11.996464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.854 [2024-11-20 17:13:11.996470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.854 [2024-11-20 17:13:11.996485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.854 qpair failed and we were unable to recover it.
00:30:19.854 [2024-11-20 17:13:12.006381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.854 [2024-11-20 17:13:12.006434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.854 [2024-11-20 17:13:12.006448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.854 [2024-11-20 17:13:12.006455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.854 [2024-11-20 17:13:12.006461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.854 [2024-11-20 17:13:12.006475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.854 qpair failed and we were unable to recover it.
00:30:19.854 [2024-11-20 17:13:12.016413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.854 [2024-11-20 17:13:12.016458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.854 [2024-11-20 17:13:12.016471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.854 [2024-11-20 17:13:12.016478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.854 [2024-11-20 17:13:12.016484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:19.854 [2024-11-20 17:13:12.016498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:19.854 qpair failed and we were unable to recover it.
00:30:20.118 [2024-11-20 17:13:12.026507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.118 [2024-11-20 17:13:12.026559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.118 [2024-11-20 17:13:12.026575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.118 [2024-11-20 17:13:12.026583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.118 [2024-11-20 17:13:12.026589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.118 [2024-11-20 17:13:12.026603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.118 qpair failed and we were unable to recover it.
00:30:20.118 [2024-11-20 17:13:12.036485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.118 [2024-11-20 17:13:12.036550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.118 [2024-11-20 17:13:12.036563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.118 [2024-11-20 17:13:12.036570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.118 [2024-11-20 17:13:12.036576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.118 [2024-11-20 17:13:12.036591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.118 qpair failed and we were unable to recover it.
00:30:20.118 [2024-11-20 17:13:12.046688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.118 [2024-11-20 17:13:12.046766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.118 [2024-11-20 17:13:12.046779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.118 [2024-11-20 17:13:12.046786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.118 [2024-11-20 17:13:12.046793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.118 [2024-11-20 17:13:12.046807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.118 qpair failed and we were unable to recover it.
00:30:20.118 [2024-11-20 17:13:12.056569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.118 [2024-11-20 17:13:12.056619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.118 [2024-11-20 17:13:12.056632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.118 [2024-11-20 17:13:12.056639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.118 [2024-11-20 17:13:12.056645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.118 [2024-11-20 17:13:12.056660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.118 qpair failed and we were unable to recover it.
00:30:20.118 [2024-11-20 17:13:12.066632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.118 [2024-11-20 17:13:12.066686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.118 [2024-11-20 17:13:12.066699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.118 [2024-11-20 17:13:12.066706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.118 [2024-11-20 17:13:12.066716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.118 [2024-11-20 17:13:12.066730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.118 qpair failed and we were unable to recover it.
00:30:20.118 [2024-11-20 17:13:12.076609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.118 [2024-11-20 17:13:12.076660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.118 [2024-11-20 17:13:12.076673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.118 [2024-11-20 17:13:12.076680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.118 [2024-11-20 17:13:12.076687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.118 [2024-11-20 17:13:12.076701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.118 qpair failed and we were unable to recover it.
00:30:20.118 [2024-11-20 17:13:12.086619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.118 [2024-11-20 17:13:12.086665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.118 [2024-11-20 17:13:12.086678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.118 [2024-11-20 17:13:12.086686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.118 [2024-11-20 17:13:12.086692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.118 [2024-11-20 17:13:12.086706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.118 qpair failed and we were unable to recover it.
00:30:20.118 [2024-11-20 17:13:12.096652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.118 [2024-11-20 17:13:12.096705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.118 [2024-11-20 17:13:12.096718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.118 [2024-11-20 17:13:12.096726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.118 [2024-11-20 17:13:12.096732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.118 [2024-11-20 17:13:12.096746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.118 qpair failed and we were unable to recover it.
00:30:20.118 [2024-11-20 17:13:12.106676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.118 [2024-11-20 17:13:12.106730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.118 [2024-11-20 17:13:12.106744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.118 [2024-11-20 17:13:12.106751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.118 [2024-11-20 17:13:12.106757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.118 [2024-11-20 17:13:12.106771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.118 qpair failed and we were unable to recover it.
00:30:20.118 [2024-11-20 17:13:12.116696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.118 [2024-11-20 17:13:12.116750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.118 [2024-11-20 17:13:12.116763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.118 [2024-11-20 17:13:12.116770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.118 [2024-11-20 17:13:12.116776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.118 [2024-11-20 17:13:12.116790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.118 qpair failed and we were unable to recover it.
00:30:20.118 [2024-11-20 17:13:12.126727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.118 [2024-11-20 17:13:12.126783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.118 [2024-11-20 17:13:12.126796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.118 [2024-11-20 17:13:12.126804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.118 [2024-11-20 17:13:12.126810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.118 [2024-11-20 17:13:12.126824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.118 qpair failed and we were unable to recover it.
00:30:20.118 [2024-11-20 17:13:12.136769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.118 [2024-11-20 17:13:12.136868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.118 [2024-11-20 17:13:12.136881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.118 [2024-11-20 17:13:12.136888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.118 [2024-11-20 17:13:12.136895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.118 [2024-11-20 17:13:12.136909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.118 qpair failed and we were unable to recover it.
00:30:20.118 [2024-11-20 17:13:12.146827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.119 [2024-11-20 17:13:12.146879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.119 [2024-11-20 17:13:12.146892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.119 [2024-11-20 17:13:12.146899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.119 [2024-11-20 17:13:12.146905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.119 [2024-11-20 17:13:12.146919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.119 qpair failed and we were unable to recover it.
00:30:20.119 [2024-11-20 17:13:12.156817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.119 [2024-11-20 17:13:12.156873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.119 [2024-11-20 17:13:12.156903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.119 [2024-11-20 17:13:12.156912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.119 [2024-11-20 17:13:12.156918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.119 [2024-11-20 17:13:12.156938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.119 qpair failed and we were unable to recover it.
00:30:20.119 [2024-11-20 17:13:12.166848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.119 [2024-11-20 17:13:12.166905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.119 [2024-11-20 17:13:12.166931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.119 [2024-11-20 17:13:12.166940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.119 [2024-11-20 17:13:12.166947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.119 [2024-11-20 17:13:12.166967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.119 qpair failed and we were unable to recover it.
00:30:20.119 [2024-11-20 17:13:12.176861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.119 [2024-11-20 17:13:12.176923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.119 [2024-11-20 17:13:12.176948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.119 [2024-11-20 17:13:12.176956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.119 [2024-11-20 17:13:12.176963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.119 [2024-11-20 17:13:12.176984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.119 qpair failed and we were unable to recover it.
00:30:20.119 [2024-11-20 17:13:12.186937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.119 [2024-11-20 17:13:12.187034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.119 [2024-11-20 17:13:12.187049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.119 [2024-11-20 17:13:12.187056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.119 [2024-11-20 17:13:12.187063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.119 [2024-11-20 17:13:12.187079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.119 qpair failed and we were unable to recover it.
00:30:20.119 [2024-11-20 17:13:12.196893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.119 [2024-11-20 17:13:12.196943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.119 [2024-11-20 17:13:12.196957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.119 [2024-11-20 17:13:12.196969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.119 [2024-11-20 17:13:12.196975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.119 [2024-11-20 17:13:12.196990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.119 qpair failed and we were unable to recover it.
00:30:20.119 [2024-11-20 17:13:12.206900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.119 [2024-11-20 17:13:12.206952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.119 [2024-11-20 17:13:12.206965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.119 [2024-11-20 17:13:12.206972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.119 [2024-11-20 17:13:12.206979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.119 [2024-11-20 17:13:12.206993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.119 qpair failed and we were unable to recover it.
00:30:20.119 [2024-11-20 17:13:12.216982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.119 [2024-11-20 17:13:12.217040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.119 [2024-11-20 17:13:12.217053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.119 [2024-11-20 17:13:12.217060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.119 [2024-11-20 17:13:12.217066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.119 [2024-11-20 17:13:12.217080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.119 qpair failed and we were unable to recover it.
00:30:20.119 [2024-11-20 17:13:12.227059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.119 [2024-11-20 17:13:12.227110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.119 [2024-11-20 17:13:12.227124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.119 [2024-11-20 17:13:12.227131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.119 [2024-11-20 17:13:12.227138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.119 [2024-11-20 17:13:12.227152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.119 qpair failed and we were unable to recover it.
00:30:20.119 [2024-11-20 17:13:12.237026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.119 [2024-11-20 17:13:12.237074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.119 [2024-11-20 17:13:12.237088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.119 [2024-11-20 17:13:12.237095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.119 [2024-11-20 17:13:12.237101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.119 [2024-11-20 17:13:12.237119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.119 qpair failed and we were unable to recover it.
00:30:20.119 [2024-11-20 17:13:12.246954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.119 [2024-11-20 17:13:12.247002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.119 [2024-11-20 17:13:12.247015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.119 [2024-11-20 17:13:12.247022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.119 [2024-11-20 17:13:12.247028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.119 [2024-11-20 17:13:12.247042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.119 qpair failed and we were unable to recover it.
00:30:20.119 [2024-11-20 17:13:12.257086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.119 [2024-11-20 17:13:12.257154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.119 [2024-11-20 17:13:12.257172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.119 [2024-11-20 17:13:12.257179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.119 [2024-11-20 17:13:12.257186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.119 [2024-11-20 17:13:12.257200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.119 qpair failed and we were unable to recover it.
00:30:20.119 [2024-11-20 17:13:12.267169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.119 [2024-11-20 17:13:12.267259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.119 [2024-11-20 17:13:12.267271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.119 [2024-11-20 17:13:12.267278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.119 [2024-11-20 17:13:12.267285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90
00:30:20.119 [2024-11-20 17:13:12.267300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:20.119 qpair failed and we were unable to recover it.
00:30:20.119 [2024-11-20 17:13:12.277143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.120 [2024-11-20 17:13:12.277200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.120 [2024-11-20 17:13:12.277214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.120 [2024-11-20 17:13:12.277221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.120 [2024-11-20 17:13:12.277227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.120 [2024-11-20 17:13:12.277241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-11-20 17:13:12.287143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.120 [2024-11-20 17:13:12.287200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.120 [2024-11-20 17:13:12.287214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.120 [2024-11-20 17:13:12.287221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.120 [2024-11-20 17:13:12.287227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.120 [2024-11-20 17:13:12.287241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.383 [2024-11-20 17:13:12.297181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.383 [2024-11-20 17:13:12.297229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.383 [2024-11-20 17:13:12.297242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.383 [2024-11-20 17:13:12.297249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.383 [2024-11-20 17:13:12.297256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.383 [2024-11-20 17:13:12.297270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.383 qpair failed and we were unable to recover it. 
00:30:20.383 [2024-11-20 17:13:12.307260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.383 [2024-11-20 17:13:12.307313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.383 [2024-11-20 17:13:12.307326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.383 [2024-11-20 17:13:12.307333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.383 [2024-11-20 17:13:12.307340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.383 [2024-11-20 17:13:12.307354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.383 qpair failed and we were unable to recover it. 00:30:20.383 [2024-11-20 17:13:12.317230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.383 [2024-11-20 17:13:12.317278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.383 [2024-11-20 17:13:12.317291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.383 [2024-11-20 17:13:12.317298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.383 [2024-11-20 17:13:12.317305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.383 [2024-11-20 17:13:12.317319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.383 qpair failed and we were unable to recover it. 00:30:20.383 [2024-11-20 17:13:12.327250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.384 [2024-11-20 17:13:12.327297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.384 [2024-11-20 17:13:12.327310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.384 [2024-11-20 17:13:12.327320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.384 [2024-11-20 17:13:12.327327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.384 [2024-11-20 17:13:12.327341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.384 qpair failed and we were unable to recover it. 
00:30:20.384 [2024-11-20 17:13:12.337297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.384 [2024-11-20 17:13:12.337345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.384 [2024-11-20 17:13:12.337358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.384 [2024-11-20 17:13:12.337365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.384 [2024-11-20 17:13:12.337372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.384 [2024-11-20 17:13:12.337387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.384 qpair failed and we were unable to recover it. 00:30:20.384 [2024-11-20 17:13:12.347381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.384 [2024-11-20 17:13:12.347436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.384 [2024-11-20 17:13:12.347452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.384 [2024-11-20 17:13:12.347459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.384 [2024-11-20 17:13:12.347468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.384 [2024-11-20 17:13:12.347484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.384 qpair failed and we were unable to recover it. 00:30:20.384 [2024-11-20 17:13:12.357247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.384 [2024-11-20 17:13:12.357295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.384 [2024-11-20 17:13:12.357308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.384 [2024-11-20 17:13:12.357315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.384 [2024-11-20 17:13:12.357321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.384 [2024-11-20 17:13:12.357336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.384 qpair failed and we were unable to recover it. 
00:30:20.384 [2024-11-20 17:13:12.367310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.384 [2024-11-20 17:13:12.367357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.384 [2024-11-20 17:13:12.367370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.384 [2024-11-20 17:13:12.367377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.384 [2024-11-20 17:13:12.367383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.384 [2024-11-20 17:13:12.367400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.384 qpair failed and we were unable to recover it. 00:30:20.384 [2024-11-20 17:13:12.377418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.384 [2024-11-20 17:13:12.377466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.384 [2024-11-20 17:13:12.377479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.384 [2024-11-20 17:13:12.377486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.384 [2024-11-20 17:13:12.377494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.384 [2024-11-20 17:13:12.377509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.384 qpair failed and we were unable to recover it. 00:30:20.384 [2024-11-20 17:13:12.387484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.384 [2024-11-20 17:13:12.387541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.384 [2024-11-20 17:13:12.387554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.384 [2024-11-20 17:13:12.387562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.384 [2024-11-20 17:13:12.387569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.384 [2024-11-20 17:13:12.387583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.384 qpair failed and we were unable to recover it. 
00:30:20.384 [2024-11-20 17:13:12.397489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.384 [2024-11-20 17:13:12.397543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.384 [2024-11-20 17:13:12.397556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.384 [2024-11-20 17:13:12.397563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.384 [2024-11-20 17:13:12.397569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.384 [2024-11-20 17:13:12.397584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.384 qpair failed and we were unable to recover it. 00:30:20.384 [2024-11-20 17:13:12.407488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.384 [2024-11-20 17:13:12.407542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.384 [2024-11-20 17:13:12.407556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.384 [2024-11-20 17:13:12.407563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.384 [2024-11-20 17:13:12.407569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.384 [2024-11-20 17:13:12.407584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.384 qpair failed and we were unable to recover it. 00:30:20.384 [2024-11-20 17:13:12.417534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.384 [2024-11-20 17:13:12.417582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.384 [2024-11-20 17:13:12.417595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.384 [2024-11-20 17:13:12.417602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.384 [2024-11-20 17:13:12.417608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.384 [2024-11-20 17:13:12.417622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.384 qpair failed and we were unable to recover it. 
00:30:20.384 [2024-11-20 17:13:12.427582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.384 [2024-11-20 17:13:12.427637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.384 [2024-11-20 17:13:12.427651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.384 [2024-11-20 17:13:12.427657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.384 [2024-11-20 17:13:12.427664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.384 [2024-11-20 17:13:12.427678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.384 qpair failed and we were unable to recover it. 00:30:20.384 [2024-11-20 17:13:12.437594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.384 [2024-11-20 17:13:12.437647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.384 [2024-11-20 17:13:12.437660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.384 [2024-11-20 17:13:12.437667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.384 [2024-11-20 17:13:12.437673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.384 [2024-11-20 17:13:12.437687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.384 qpair failed and we were unable to recover it. 00:30:20.384 [2024-11-20 17:13:12.447616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.384 [2024-11-20 17:13:12.447701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.384 [2024-11-20 17:13:12.447714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.384 [2024-11-20 17:13:12.447721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.385 [2024-11-20 17:13:12.447727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.385 [2024-11-20 17:13:12.447742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.385 qpair failed and we were unable to recover it. 
00:30:20.385 [2024-11-20 17:13:12.457631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.385 [2024-11-20 17:13:12.457691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.385 [2024-11-20 17:13:12.457707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.385 [2024-11-20 17:13:12.457714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.385 [2024-11-20 17:13:12.457721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.385 [2024-11-20 17:13:12.457735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.385 qpair failed and we were unable to recover it. 00:30:20.385 [2024-11-20 17:13:12.467674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.385 [2024-11-20 17:13:12.467726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.385 [2024-11-20 17:13:12.467739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.385 [2024-11-20 17:13:12.467746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.385 [2024-11-20 17:13:12.467752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.385 [2024-11-20 17:13:12.467767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.385 qpair failed and we were unable to recover it. 00:30:20.385 [2024-11-20 17:13:12.477660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.385 [2024-11-20 17:13:12.477707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.385 [2024-11-20 17:13:12.477721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.385 [2024-11-20 17:13:12.477727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.385 [2024-11-20 17:13:12.477734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.385 [2024-11-20 17:13:12.477748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.385 qpair failed and we were unable to recover it. 
00:30:20.385 [2024-11-20 17:13:12.487708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.385 [2024-11-20 17:13:12.487752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.385 [2024-11-20 17:13:12.487765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.385 [2024-11-20 17:13:12.487772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.385 [2024-11-20 17:13:12.487778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.385 [2024-11-20 17:13:12.487792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.385 qpair failed and we were unable to recover it. 00:30:20.385 [2024-11-20 17:13:12.497742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.385 [2024-11-20 17:13:12.497791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.385 [2024-11-20 17:13:12.497805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.385 [2024-11-20 17:13:12.497812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.385 [2024-11-20 17:13:12.497823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.385 [2024-11-20 17:13:12.497838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.385 qpair failed and we were unable to recover it. 00:30:20.385 [2024-11-20 17:13:12.507692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.385 [2024-11-20 17:13:12.507748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.385 [2024-11-20 17:13:12.507762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.385 [2024-11-20 17:13:12.507769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.385 [2024-11-20 17:13:12.507777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.385 [2024-11-20 17:13:12.507792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.385 qpair failed and we were unable to recover it. 
00:30:20.385 [2024-11-20 17:13:12.517806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.385 [2024-11-20 17:13:12.517864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.385 [2024-11-20 17:13:12.517877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.385 [2024-11-20 17:13:12.517884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.385 [2024-11-20 17:13:12.517891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.385 [2024-11-20 17:13:12.517905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.385 qpair failed and we were unable to recover it. 00:30:20.385 [2024-11-20 17:13:12.527826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.385 [2024-11-20 17:13:12.527882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.385 [2024-11-20 17:13:12.527895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.385 [2024-11-20 17:13:12.527902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.385 [2024-11-20 17:13:12.527908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.385 [2024-11-20 17:13:12.527922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.385 qpair failed and we were unable to recover it. 00:30:20.385 [2024-11-20 17:13:12.537833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.385 [2024-11-20 17:13:12.537888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.385 [2024-11-20 17:13:12.537913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.385 [2024-11-20 17:13:12.537922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.385 [2024-11-20 17:13:12.537929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.385 [2024-11-20 17:13:12.537949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.385 qpair failed and we were unable to recover it. 
00:30:20.385 [2024-11-20 17:13:12.547931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.385 [2024-11-20 17:13:12.548035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.385 [2024-11-20 17:13:12.548062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.385 [2024-11-20 17:13:12.548070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.385 [2024-11-20 17:13:12.548077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.385 [2024-11-20 17:13:12.548097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.385 qpair failed and we were unable to recover it. 00:30:20.648 [2024-11-20 17:13:12.557891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.648 [2024-11-20 17:13:12.557939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.648 [2024-11-20 17:13:12.557955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.648 [2024-11-20 17:13:12.557962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.648 [2024-11-20 17:13:12.557969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.648 [2024-11-20 17:13:12.557984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.648 qpair failed and we were unable to recover it. 00:30:20.649 [2024-11-20 17:13:12.567948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.649 [2024-11-20 17:13:12.567996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.649 [2024-11-20 17:13:12.568010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.649 [2024-11-20 17:13:12.568017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.649 [2024-11-20 17:13:12.568023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.649 [2024-11-20 17:13:12.568038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.649 qpair failed and we were unable to recover it. 
00:30:20.649 [2024-11-20 17:13:12.577970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.649 [2024-11-20 17:13:12.578034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.649 [2024-11-20 17:13:12.578047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.649 [2024-11-20 17:13:12.578055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.649 [2024-11-20 17:13:12.578061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.649 [2024-11-20 17:13:12.578075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.649 qpair failed and we were unable to recover it. 00:30:20.649 [2024-11-20 17:13:12.588097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.649 [2024-11-20 17:13:12.588155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.649 [2024-11-20 17:13:12.588181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.649 [2024-11-20 17:13:12.588189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.649 [2024-11-20 17:13:12.588196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.649 [2024-11-20 17:13:12.588212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.649 qpair failed and we were unable to recover it. 00:30:20.649 [2024-11-20 17:13:12.598020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.649 [2024-11-20 17:13:12.598070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.649 [2024-11-20 17:13:12.598083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.649 [2024-11-20 17:13:12.598090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.649 [2024-11-20 17:13:12.598097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.649 [2024-11-20 17:13:12.598111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.649 qpair failed and we were unable to recover it. 
00:30:20.649 [2024-11-20 17:13:12.608056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.649 [2024-11-20 17:13:12.608144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.649 [2024-11-20 17:13:12.608161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.649 [2024-11-20 17:13:12.608168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.649 [2024-11-20 17:13:12.608175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.649 [2024-11-20 17:13:12.608190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.649 qpair failed and we were unable to recover it. 00:30:20.649 [2024-11-20 17:13:12.618078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.649 [2024-11-20 17:13:12.618125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.649 [2024-11-20 17:13:12.618139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.649 [2024-11-20 17:13:12.618146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.649 [2024-11-20 17:13:12.618152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.649 [2024-11-20 17:13:12.618171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.649 qpair failed and we were unable to recover it. 00:30:20.649 [2024-11-20 17:13:12.628168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.649 [2024-11-20 17:13:12.628225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.649 [2024-11-20 17:13:12.628238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.649 [2024-11-20 17:13:12.628245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.649 [2024-11-20 17:13:12.628255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.649 [2024-11-20 17:13:12.628269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.649 qpair failed and we were unable to recover it. 
00:30:20.649 [2024-11-20 17:13:12.638161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.649 [2024-11-20 17:13:12.638213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.649 [2024-11-20 17:13:12.638226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.649 [2024-11-20 17:13:12.638233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.649 [2024-11-20 17:13:12.638239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.649 [2024-11-20 17:13:12.638253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.649 qpair failed and we were unable to recover it. 00:30:20.649 [2024-11-20 17:13:12.648167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.649 [2024-11-20 17:13:12.648218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.649 [2024-11-20 17:13:12.648231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.649 [2024-11-20 17:13:12.648238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.649 [2024-11-20 17:13:12.648244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.649 [2024-11-20 17:13:12.648259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.649 qpair failed and we were unable to recover it. 00:30:20.649 [2024-11-20 17:13:12.658177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.649 [2024-11-20 17:13:12.658221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.649 [2024-11-20 17:13:12.658234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.649 [2024-11-20 17:13:12.658241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.649 [2024-11-20 17:13:12.658248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.649 [2024-11-20 17:13:12.658262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.649 qpair failed and we were unable to recover it. 
00:30:20.649 [2024-11-20 17:13:12.668270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.649 [2024-11-20 17:13:12.668331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.649 [2024-11-20 17:13:12.668345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.649 [2024-11-20 17:13:12.668352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.649 [2024-11-20 17:13:12.668359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.649 [2024-11-20 17:13:12.668373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.649 qpair failed and we were unable to recover it. 00:30:20.649 [2024-11-20 17:13:12.678236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.649 [2024-11-20 17:13:12.678290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.649 [2024-11-20 17:13:12.678303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.649 [2024-11-20 17:13:12.678311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.649 [2024-11-20 17:13:12.678317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.649 [2024-11-20 17:13:12.678331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.649 qpair failed and we were unable to recover it. 00:30:20.649 [2024-11-20 17:13:12.688268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.649 [2024-11-20 17:13:12.688317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.649 [2024-11-20 17:13:12.688329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.649 [2024-11-20 17:13:12.688336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.649 [2024-11-20 17:13:12.688343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.650 [2024-11-20 17:13:12.688357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.650 qpair failed and we were unable to recover it. 
00:30:20.650 [2024-11-20 17:13:12.698318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.650 [2024-11-20 17:13:12.698361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.650 [2024-11-20 17:13:12.698374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.650 [2024-11-20 17:13:12.698381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.650 [2024-11-20 17:13:12.698387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.650 [2024-11-20 17:13:12.698402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.650 qpair failed and we were unable to recover it. 00:30:20.650 [2024-11-20 17:13:12.708397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.650 [2024-11-20 17:13:12.708452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.650 [2024-11-20 17:13:12.708465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.650 [2024-11-20 17:13:12.708472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.650 [2024-11-20 17:13:12.708479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.650 [2024-11-20 17:13:12.708493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.650 qpair failed and we were unable to recover it. 00:30:20.650 [2024-11-20 17:13:12.718283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.650 [2024-11-20 17:13:12.718352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.650 [2024-11-20 17:13:12.718368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.650 [2024-11-20 17:13:12.718375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.650 [2024-11-20 17:13:12.718381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.650 [2024-11-20 17:13:12.718395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.650 qpair failed and we were unable to recover it. 
00:30:20.650 [2024-11-20 17:13:12.728397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.650 [2024-11-20 17:13:12.728447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.650 [2024-11-20 17:13:12.728461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.650 [2024-11-20 17:13:12.728468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.650 [2024-11-20 17:13:12.728475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.650 [2024-11-20 17:13:12.728489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.650 qpair failed and we were unable to recover it. 00:30:20.650 [2024-11-20 17:13:12.738285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.650 [2024-11-20 17:13:12.738336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.650 [2024-11-20 17:13:12.738349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.650 [2024-11-20 17:13:12.738356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.650 [2024-11-20 17:13:12.738363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.650 [2024-11-20 17:13:12.738377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.650 qpair failed and we were unable to recover it. 00:30:20.650 [2024-11-20 17:13:12.748460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.650 [2024-11-20 17:13:12.748515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.650 [2024-11-20 17:13:12.748528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.650 [2024-11-20 17:13:12.748535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.650 [2024-11-20 17:13:12.748541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.650 [2024-11-20 17:13:12.748555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.650 qpair failed and we were unable to recover it. 
00:30:20.650 [2024-11-20 17:13:12.758475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.650 [2024-11-20 17:13:12.758526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.650 [2024-11-20 17:13:12.758539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.650 [2024-11-20 17:13:12.758549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.650 [2024-11-20 17:13:12.758556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.650 [2024-11-20 17:13:12.758570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.650 qpair failed and we were unable to recover it. 00:30:20.650 [2024-11-20 17:13:12.768514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.650 [2024-11-20 17:13:12.768565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.650 [2024-11-20 17:13:12.768578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.650 [2024-11-20 17:13:12.768585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.650 [2024-11-20 17:13:12.768591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.650 [2024-11-20 17:13:12.768605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.650 qpair failed and we were unable to recover it. 00:30:20.650 [2024-11-20 17:13:12.778532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.650 [2024-11-20 17:13:12.778582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.650 [2024-11-20 17:13:12.778595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.650 [2024-11-20 17:13:12.778602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.650 [2024-11-20 17:13:12.778608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.650 [2024-11-20 17:13:12.778622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.650 qpair failed and we were unable to recover it. 
00:30:20.650 [2024-11-20 17:13:12.788602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.650 [2024-11-20 17:13:12.788656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.650 [2024-11-20 17:13:12.788669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.650 [2024-11-20 17:13:12.788676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.650 [2024-11-20 17:13:12.788683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.650 [2024-11-20 17:13:12.788696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.650 qpair failed and we were unable to recover it. 00:30:20.650 [2024-11-20 17:13:12.798594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.650 [2024-11-20 17:13:12.798644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.650 [2024-11-20 17:13:12.798657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.650 [2024-11-20 17:13:12.798664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.650 [2024-11-20 17:13:12.798670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.650 [2024-11-20 17:13:12.798688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.650 qpair failed and we were unable to recover it. 00:30:20.650 [2024-11-20 17:13:12.808600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.650 [2024-11-20 17:13:12.808655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.650 [2024-11-20 17:13:12.808668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.650 [2024-11-20 17:13:12.808675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.650 [2024-11-20 17:13:12.808682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:20.650 [2024-11-20 17:13:12.808696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.650 qpair failed and we were unable to recover it. 
[The identical seven-line CONNECT failure sequence repeats, with only its timestamps advancing, for 63 further attempts at roughly 10 ms intervals between 17:13:12.818 and 17:13:13.440; every attempt targets qpair id 3 (tqpair=0x7f3690000b90) on nqn.2016-06.io.spdk:cnode1 and fails with the same sct 1, sc 130 status. The final three attempts follow.]
00:30:21.444 [2024-11-20 17:13:13.450393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.444 [2024-11-20 17:13:13.450465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.444 [2024-11-20 17:13:13.450478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.444 [2024-11-20 17:13:13.450489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.444 [2024-11-20 17:13:13.450495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.444 [2024-11-20 17:13:13.450510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.444 qpair failed and we were unable to recover it. 00:30:21.444 [2024-11-20 17:13:13.460391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.444 [2024-11-20 17:13:13.460437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.444 [2024-11-20 17:13:13.460450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.444 [2024-11-20 17:13:13.460457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.444 [2024-11-20 17:13:13.460463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.444 [2024-11-20 17:13:13.460478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.444 qpair failed and we were unable to recover it. 00:30:21.444 [2024-11-20 17:13:13.470425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.444 [2024-11-20 17:13:13.470479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.444 [2024-11-20 17:13:13.470492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.444 [2024-11-20 17:13:13.470499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.444 [2024-11-20 17:13:13.470505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.444 [2024-11-20 17:13:13.470519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.444 qpair failed and we were unable to recover it. 
00:30:21.444 [2024-11-20 17:13:13.480451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.444 [2024-11-20 17:13:13.480499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.444 [2024-11-20 17:13:13.480513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.444 [2024-11-20 17:13:13.480520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.444 [2024-11-20 17:13:13.480527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.444 [2024-11-20 17:13:13.480541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.444 qpair failed and we were unable to recover it. 00:30:21.444 [2024-11-20 17:13:13.490488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.444 [2024-11-20 17:13:13.490582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.444 [2024-11-20 17:13:13.490595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.444 [2024-11-20 17:13:13.490603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.444 [2024-11-20 17:13:13.490609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.444 [2024-11-20 17:13:13.490628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.444 qpair failed and we were unable to recover it. 00:30:21.444 [2024-11-20 17:13:13.500491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.445 [2024-11-20 17:13:13.500537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.445 [2024-11-20 17:13:13.500549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.445 [2024-11-20 17:13:13.500557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.445 [2024-11-20 17:13:13.500563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.445 [2024-11-20 17:13:13.500577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.445 qpair failed and we were unable to recover it. 
00:30:21.445 [2024-11-20 17:13:13.510554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.445 [2024-11-20 17:13:13.510608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.445 [2024-11-20 17:13:13.510621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.445 [2024-11-20 17:13:13.510628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.445 [2024-11-20 17:13:13.510635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.445 [2024-11-20 17:13:13.510649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.445 qpair failed and we were unable to recover it. 00:30:21.445 [2024-11-20 17:13:13.520540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.445 [2024-11-20 17:13:13.520588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.445 [2024-11-20 17:13:13.520600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.445 [2024-11-20 17:13:13.520608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.445 [2024-11-20 17:13:13.520614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.445 [2024-11-20 17:13:13.520628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.445 qpair failed and we were unable to recover it. 00:30:21.445 [2024-11-20 17:13:13.530576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.445 [2024-11-20 17:13:13.530620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.445 [2024-11-20 17:13:13.530633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.445 [2024-11-20 17:13:13.530640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.445 [2024-11-20 17:13:13.530646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.445 [2024-11-20 17:13:13.530660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.445 qpair failed and we were unable to recover it. 
00:30:21.445 [2024-11-20 17:13:13.540605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.445 [2024-11-20 17:13:13.540654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.445 [2024-11-20 17:13:13.540667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.445 [2024-11-20 17:13:13.540674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.445 [2024-11-20 17:13:13.540681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.445 [2024-11-20 17:13:13.540694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.445 qpair failed and we were unable to recover it. 00:30:21.445 [2024-11-20 17:13:13.550571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.445 [2024-11-20 17:13:13.550623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.445 [2024-11-20 17:13:13.550637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.445 [2024-11-20 17:13:13.550644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.445 [2024-11-20 17:13:13.550650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.445 [2024-11-20 17:13:13.550665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.445 qpair failed and we were unable to recover it. 00:30:21.445 [2024-11-20 17:13:13.560652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.445 [2024-11-20 17:13:13.560706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.445 [2024-11-20 17:13:13.560719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.445 [2024-11-20 17:13:13.560726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.445 [2024-11-20 17:13:13.560733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.445 [2024-11-20 17:13:13.560747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.445 qpair failed and we were unable to recover it. 
00:30:21.445 [2024-11-20 17:13:13.570691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.445 [2024-11-20 17:13:13.570779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.445 [2024-11-20 17:13:13.570792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.445 [2024-11-20 17:13:13.570799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.445 [2024-11-20 17:13:13.570806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.445 [2024-11-20 17:13:13.570819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.445 qpair failed and we were unable to recover it. 00:30:21.445 [2024-11-20 17:13:13.580707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.445 [2024-11-20 17:13:13.580752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.445 [2024-11-20 17:13:13.580769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.445 [2024-11-20 17:13:13.580775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.445 [2024-11-20 17:13:13.580782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.445 [2024-11-20 17:13:13.580795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.445 qpair failed and we were unable to recover it. 00:30:21.445 [2024-11-20 17:13:13.590773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.445 [2024-11-20 17:13:13.590826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.445 [2024-11-20 17:13:13.590839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.445 [2024-11-20 17:13:13.590846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.445 [2024-11-20 17:13:13.590852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.445 [2024-11-20 17:13:13.590866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.445 qpair failed and we were unable to recover it. 
00:30:21.445 [2024-11-20 17:13:13.600782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.445 [2024-11-20 17:13:13.600834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.445 [2024-11-20 17:13:13.600847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.445 [2024-11-20 17:13:13.600854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.446 [2024-11-20 17:13:13.600860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.446 [2024-11-20 17:13:13.600874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.446 qpair failed and we were unable to recover it. 00:30:21.446 [2024-11-20 17:13:13.610777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.446 [2024-11-20 17:13:13.610822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.446 [2024-11-20 17:13:13.610835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.446 [2024-11-20 17:13:13.610842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.446 [2024-11-20 17:13:13.610849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.446 [2024-11-20 17:13:13.610863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.446 qpair failed and we were unable to recover it. 00:30:21.709 [2024-11-20 17:13:13.620803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.709 [2024-11-20 17:13:13.620865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.709 [2024-11-20 17:13:13.620879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.709 [2024-11-20 17:13:13.620886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.709 [2024-11-20 17:13:13.620896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.709 [2024-11-20 17:13:13.620910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.709 qpair failed and we were unable to recover it. 
00:30:21.709 [2024-11-20 17:13:13.630865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.709 [2024-11-20 17:13:13.630923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.709 [2024-11-20 17:13:13.630948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.709 [2024-11-20 17:13:13.630956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.709 [2024-11-20 17:13:13.630963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.709 [2024-11-20 17:13:13.630983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.709 qpair failed and we were unable to recover it. 00:30:21.709 [2024-11-20 17:13:13.640863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.709 [2024-11-20 17:13:13.640916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.709 [2024-11-20 17:13:13.640941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.709 [2024-11-20 17:13:13.640950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.709 [2024-11-20 17:13:13.640957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.709 [2024-11-20 17:13:13.640977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.709 qpair failed and we were unable to recover it. 00:30:21.709 [2024-11-20 17:13:13.650891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.709 [2024-11-20 17:13:13.650937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.709 [2024-11-20 17:13:13.650952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.709 [2024-11-20 17:13:13.650960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.709 [2024-11-20 17:13:13.650966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.709 [2024-11-20 17:13:13.650982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.709 qpair failed and we were unable to recover it. 
00:30:21.709 [2024-11-20 17:13:13.660959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.709 [2024-11-20 17:13:13.661037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.709 [2024-11-20 17:13:13.661051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.709 [2024-11-20 17:13:13.661058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.709 [2024-11-20 17:13:13.661064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.709 [2024-11-20 17:13:13.661080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.709 qpair failed and we were unable to recover it. 00:30:21.709 [2024-11-20 17:13:13.670997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.709 [2024-11-20 17:13:13.671054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.709 [2024-11-20 17:13:13.671067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.709 [2024-11-20 17:13:13.671074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.709 [2024-11-20 17:13:13.671081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.709 [2024-11-20 17:13:13.671095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.709 qpair failed and we were unable to recover it. 00:30:21.709 [2024-11-20 17:13:13.680967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.709 [2024-11-20 17:13:13.681015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.709 [2024-11-20 17:13:13.681028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.709 [2024-11-20 17:13:13.681035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.709 [2024-11-20 17:13:13.681042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.709 [2024-11-20 17:13:13.681056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.709 qpair failed and we were unable to recover it. 
00:30:21.709 [2024-11-20 17:13:13.690988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.709 [2024-11-20 17:13:13.691036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.710 [2024-11-20 17:13:13.691049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.710 [2024-11-20 17:13:13.691056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.710 [2024-11-20 17:13:13.691062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.710 [2024-11-20 17:13:13.691077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.710 qpair failed and we were unable to recover it. 00:30:21.710 [2024-11-20 17:13:13.701004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.710 [2024-11-20 17:13:13.701050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.710 [2024-11-20 17:13:13.701063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.710 [2024-11-20 17:13:13.701070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.710 [2024-11-20 17:13:13.701076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.710 [2024-11-20 17:13:13.701090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.710 qpair failed and we were unable to recover it. 00:30:21.710 [2024-11-20 17:13:13.710966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.710 [2024-11-20 17:13:13.711024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.710 [2024-11-20 17:13:13.711044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.710 [2024-11-20 17:13:13.711051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.710 [2024-11-20 17:13:13.711057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.710 [2024-11-20 17:13:13.711073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.710 qpair failed and we were unable to recover it. 
00:30:21.710 [2024-11-20 17:13:13.721076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.710 [2024-11-20 17:13:13.721130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.710 [2024-11-20 17:13:13.721144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.710 [2024-11-20 17:13:13.721151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.710 [2024-11-20 17:13:13.721157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.710 [2024-11-20 17:13:13.721176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.710 qpair failed and we were unable to recover it. 00:30:21.710 [2024-11-20 17:13:13.731049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.710 [2024-11-20 17:13:13.731126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.710 [2024-11-20 17:13:13.731140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.710 [2024-11-20 17:13:13.731147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.710 [2024-11-20 17:13:13.731153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.710 [2024-11-20 17:13:13.731171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.710 qpair failed and we were unable to recover it. 00:30:21.710 [2024-11-20 17:13:13.741117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.710 [2024-11-20 17:13:13.741164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.710 [2024-11-20 17:13:13.741177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.710 [2024-11-20 17:13:13.741184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.710 [2024-11-20 17:13:13.741191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.710 [2024-11-20 17:13:13.741205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.710 qpair failed and we were unable to recover it. 
00:30:21.710 [2024-11-20 17:13:13.751216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.710 [2024-11-20 17:13:13.751270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.710 [2024-11-20 17:13:13.751283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.710 [2024-11-20 17:13:13.751290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.710 [2024-11-20 17:13:13.751300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.710 [2024-11-20 17:13:13.751315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.710 qpair failed and we were unable to recover it. 00:30:21.710 [2024-11-20 17:13:13.761162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.710 [2024-11-20 17:13:13.761214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.710 [2024-11-20 17:13:13.761227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.710 [2024-11-20 17:13:13.761234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.710 [2024-11-20 17:13:13.761240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.710 [2024-11-20 17:13:13.761255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.710 qpair failed and we were unable to recover it. 00:30:21.710 [2024-11-20 17:13:13.771223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.710 [2024-11-20 17:13:13.771270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.710 [2024-11-20 17:13:13.771283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.710 [2024-11-20 17:13:13.771290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.710 [2024-11-20 17:13:13.771297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.710 [2024-11-20 17:13:13.771311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.710 qpair failed and we were unable to recover it. 
00:30:21.710 [2024-11-20 17:13:13.781230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.710 [2024-11-20 17:13:13.781278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.710 [2024-11-20 17:13:13.781291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.710 [2024-11-20 17:13:13.781298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.710 [2024-11-20 17:13:13.781304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.710 [2024-11-20 17:13:13.781319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.710 qpair failed and we were unable to recover it. 00:30:21.710 [2024-11-20 17:13:13.791330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.710 [2024-11-20 17:13:13.791386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.710 [2024-11-20 17:13:13.791399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.710 [2024-11-20 17:13:13.791406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.710 [2024-11-20 17:13:13.791412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.710 [2024-11-20 17:13:13.791426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.710 qpair failed and we were unable to recover it. 00:30:21.710 [2024-11-20 17:13:13.801312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.710 [2024-11-20 17:13:13.801409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.710 [2024-11-20 17:13:13.801422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.710 [2024-11-20 17:13:13.801429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.710 [2024-11-20 17:13:13.801435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.710 [2024-11-20 17:13:13.801449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.710 qpair failed and we were unable to recover it. 
00:30:21.710 [2024-11-20 17:13:13.811312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.710 [2024-11-20 17:13:13.811363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.710 [2024-11-20 17:13:13.811377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.710 [2024-11-20 17:13:13.811383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.710 [2024-11-20 17:13:13.811390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.710 [2024-11-20 17:13:13.811404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.710 qpair failed and we were unable to recover it. 00:30:21.710 [2024-11-20 17:13:13.821350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.710 [2024-11-20 17:13:13.821435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.711 [2024-11-20 17:13:13.821449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.711 [2024-11-20 17:13:13.821457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.711 [2024-11-20 17:13:13.821465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.711 [2024-11-20 17:13:13.821483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-11-20 17:13:13.831468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.711 [2024-11-20 17:13:13.831520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.711 [2024-11-20 17:13:13.831533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.711 [2024-11-20 17:13:13.831540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.711 [2024-11-20 17:13:13.831547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.711 [2024-11-20 17:13:13.831561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.711 qpair failed and we were unable to recover it. 
00:30:21.711 [2024-11-20 17:13:13.841407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.711 [2024-11-20 17:13:13.841471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.711 [2024-11-20 17:13:13.841484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.711 [2024-11-20 17:13:13.841491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.711 [2024-11-20 17:13:13.841497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.711 [2024-11-20 17:13:13.841511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-11-20 17:13:13.851439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.711 [2024-11-20 17:13:13.851485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.711 [2024-11-20 17:13:13.851499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.711 [2024-11-20 17:13:13.851506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.711 [2024-11-20 17:13:13.851512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.711 [2024-11-20 17:13:13.851526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-11-20 17:13:13.861490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.711 [2024-11-20 17:13:13.861539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.711 [2024-11-20 17:13:13.861552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.711 [2024-11-20 17:13:13.861559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.711 [2024-11-20 17:13:13.861565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.711 [2024-11-20 17:13:13.861579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.711 qpair failed and we were unable to recover it. 
00:30:21.711 [2024-11-20 17:13:13.871528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.711 [2024-11-20 17:13:13.871580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.711 [2024-11-20 17:13:13.871593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.711 [2024-11-20 17:13:13.871600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.711 [2024-11-20 17:13:13.871606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.711 [2024-11-20 17:13:13.871621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.974 [2024-11-20 17:13:13.881499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.974 [2024-11-20 17:13:13.881547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.974 [2024-11-20 17:13:13.881560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.974 [2024-11-20 17:13:13.881571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.974 [2024-11-20 17:13:13.881577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.974 [2024-11-20 17:13:13.881591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.974 qpair failed and we were unable to recover it. 00:30:21.974 [2024-11-20 17:13:13.891537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.974 [2024-11-20 17:13:13.891598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.974 [2024-11-20 17:13:13.891611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.974 [2024-11-20 17:13:13.891618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.974 [2024-11-20 17:13:13.891624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.975 [2024-11-20 17:13:13.891638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.975 qpair failed and we were unable to recover it. 
00:30:21.975 [2024-11-20 17:13:13.901562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.975 [2024-11-20 17:13:13.901617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.975 [2024-11-20 17:13:13.901630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.975 [2024-11-20 17:13:13.901636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.975 [2024-11-20 17:13:13.901643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.975 [2024-11-20 17:13:13.901658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.975 qpair failed and we were unable to recover it. 00:30:21.975 [2024-11-20 17:13:13.911660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.975 [2024-11-20 17:13:13.911728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.975 [2024-11-20 17:13:13.911741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.975 [2024-11-20 17:13:13.911748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.975 [2024-11-20 17:13:13.911755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.975 [2024-11-20 17:13:13.911769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.975 qpair failed and we were unable to recover it. 00:30:21.975 [2024-11-20 17:13:13.921610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.975 [2024-11-20 17:13:13.921661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.975 [2024-11-20 17:13:13.921674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.975 [2024-11-20 17:13:13.921681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.975 [2024-11-20 17:13:13.921687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.975 [2024-11-20 17:13:13.921704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.975 qpair failed and we were unable to recover it. 
00:30:21.975 [2024-11-20 17:13:13.931651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.975 [2024-11-20 17:13:13.931700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.975 [2024-11-20 17:13:13.931714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.975 [2024-11-20 17:13:13.931721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.975 [2024-11-20 17:13:13.931727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.975 [2024-11-20 17:13:13.931741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.975 qpair failed and we were unable to recover it. 00:30:21.975 [2024-11-20 17:13:13.941690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.975 [2024-11-20 17:13:13.941741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.975 [2024-11-20 17:13:13.941754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.975 [2024-11-20 17:13:13.941761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.975 [2024-11-20 17:13:13.941767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.975 [2024-11-20 17:13:13.941781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.975 qpair failed and we were unable to recover it. 00:30:21.975 [2024-11-20 17:13:13.951774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.975 [2024-11-20 17:13:13.951828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.975 [2024-11-20 17:13:13.951841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.975 [2024-11-20 17:13:13.951848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.975 [2024-11-20 17:13:13.951855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:21.975 [2024-11-20 17:13:13.951868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.975 qpair failed and we were unable to recover it. 
[66 further identical CONNECT failure blocks condensed: the sequence above repeats roughly every 10 ms from 17:13:13.961 through 17:13:14.613 (console time 00:30:21.975 to 00:30:22.508), always for controller ID 0x1 on tqpair=0x7f3690000b90 / qpair id 3, and each attempt ends with "qpair failed and we were unable to recover it."]
00:30:22.508 [2024-11-20 17:13:14.623486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.508 [2024-11-20 17:13:14.623531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.508 [2024-11-20 17:13:14.623545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.508 [2024-11-20 17:13:14.623552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.508 [2024-11-20 17:13:14.623558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.508 [2024-11-20 17:13:14.623572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 17:13:14.633536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.508 [2024-11-20 17:13:14.633582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.508 [2024-11-20 17:13:14.633595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.508 [2024-11-20 17:13:14.633602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.508 [2024-11-20 17:13:14.633609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.508 [2024-11-20 17:13:14.633623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 17:13:14.643569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.508 [2024-11-20 17:13:14.643621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.508 [2024-11-20 17:13:14.643634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.508 [2024-11-20 17:13:14.643641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.508 [2024-11-20 17:13:14.643647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.508 [2024-11-20 17:13:14.643661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.508 qpair failed and we were unable to recover it. 
00:30:22.508 [2024-11-20 17:13:14.653605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.508 [2024-11-20 17:13:14.653723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.508 [2024-11-20 17:13:14.653736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.508 [2024-11-20 17:13:14.653744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.508 [2024-11-20 17:13:14.653750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.508 [2024-11-20 17:13:14.653764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 17:13:14.663552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.508 [2024-11-20 17:13:14.663601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.508 [2024-11-20 17:13:14.663614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.508 [2024-11-20 17:13:14.663622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.508 [2024-11-20 17:13:14.663628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.508 [2024-11-20 17:13:14.663642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 17:13:14.673672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.508 [2024-11-20 17:13:14.673720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.508 [2024-11-20 17:13:14.673733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.508 [2024-11-20 17:13:14.673740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.508 [2024-11-20 17:13:14.673746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.508 [2024-11-20 17:13:14.673760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.508 qpair failed and we were unable to recover it. 
00:30:22.771 [2024-11-20 17:13:14.683666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.771 [2024-11-20 17:13:14.683753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.771 [2024-11-20 17:13:14.683766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.771 [2024-11-20 17:13:14.683773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.771 [2024-11-20 17:13:14.683780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.771 [2024-11-20 17:13:14.683794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.771 qpair failed and we were unable to recover it. 00:30:22.771 [2024-11-20 17:13:14.693726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.771 [2024-11-20 17:13:14.693768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.772 [2024-11-20 17:13:14.693781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.772 [2024-11-20 17:13:14.693788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.772 [2024-11-20 17:13:14.693795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.772 [2024-11-20 17:13:14.693809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.772 qpair failed and we were unable to recover it. 00:30:22.772 [2024-11-20 17:13:14.703700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.772 [2024-11-20 17:13:14.703744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.772 [2024-11-20 17:13:14.703761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.772 [2024-11-20 17:13:14.703768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.772 [2024-11-20 17:13:14.703774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.772 [2024-11-20 17:13:14.703788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.772 qpair failed and we were unable to recover it. 
00:30:22.772 [2024-11-20 17:13:14.713739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.772 [2024-11-20 17:13:14.713784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.772 [2024-11-20 17:13:14.713798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.772 [2024-11-20 17:13:14.713804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.772 [2024-11-20 17:13:14.713811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.772 [2024-11-20 17:13:14.713824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.772 qpair failed and we were unable to recover it. 00:30:22.772 [2024-11-20 17:13:14.723643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.772 [2024-11-20 17:13:14.723691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.772 [2024-11-20 17:13:14.723705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.772 [2024-11-20 17:13:14.723712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.772 [2024-11-20 17:13:14.723718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.772 [2024-11-20 17:13:14.723732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.772 qpair failed and we were unable to recover it. 00:30:22.772 [2024-11-20 17:13:14.733789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.772 [2024-11-20 17:13:14.733867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.772 [2024-11-20 17:13:14.733880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.772 [2024-11-20 17:13:14.733887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.772 [2024-11-20 17:13:14.733893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.772 [2024-11-20 17:13:14.733907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.772 qpair failed and we were unable to recover it. 
00:30:22.772 [2024-11-20 17:13:14.743818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.772 [2024-11-20 17:13:14.743874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.772 [2024-11-20 17:13:14.743888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.772 [2024-11-20 17:13:14.743895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.772 [2024-11-20 17:13:14.743905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.772 [2024-11-20 17:13:14.743919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.772 qpair failed and we were unable to recover it. 00:30:22.772 [2024-11-20 17:13:14.753830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.772 [2024-11-20 17:13:14.753884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.772 [2024-11-20 17:13:14.753908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.772 [2024-11-20 17:13:14.753917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.772 [2024-11-20 17:13:14.753925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.772 [2024-11-20 17:13:14.753945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.772 qpair failed and we were unable to recover it. 00:30:22.772 [2024-11-20 17:13:14.763843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.772 [2024-11-20 17:13:14.763895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.772 [2024-11-20 17:13:14.763920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.772 [2024-11-20 17:13:14.763929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.772 [2024-11-20 17:13:14.763936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.772 [2024-11-20 17:13:14.763956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.772 qpair failed and we were unable to recover it. 
00:30:22.772 [2024-11-20 17:13:14.773895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.772 [2024-11-20 17:13:14.773948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.772 [2024-11-20 17:13:14.773973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.772 [2024-11-20 17:13:14.773982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.772 [2024-11-20 17:13:14.773989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.772 [2024-11-20 17:13:14.774008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.772 qpair failed and we were unable to recover it. 00:30:22.772 [2024-11-20 17:13:14.783933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.772 [2024-11-20 17:13:14.783977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.772 [2024-11-20 17:13:14.783992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.773 [2024-11-20 17:13:14.784000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.773 [2024-11-20 17:13:14.784006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.773 [2024-11-20 17:13:14.784022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.773 qpair failed and we were unable to recover it. 00:30:22.773 [2024-11-20 17:13:14.793828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.773 [2024-11-20 17:13:14.793875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.773 [2024-11-20 17:13:14.793889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.773 [2024-11-20 17:13:14.793897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.773 [2024-11-20 17:13:14.793903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.773 [2024-11-20 17:13:14.793918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.773 qpair failed and we were unable to recover it. 
00:30:22.773 [2024-11-20 17:13:14.804040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.773 [2024-11-20 17:13:14.804110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.773 [2024-11-20 17:13:14.804125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.773 [2024-11-20 17:13:14.804135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.773 [2024-11-20 17:13:14.804142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.773 [2024-11-20 17:13:14.804172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.773 qpair failed and we were unable to recover it. 00:30:22.773 [2024-11-20 17:13:14.813979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.773 [2024-11-20 17:13:14.814024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.773 [2024-11-20 17:13:14.814037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.773 [2024-11-20 17:13:14.814044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.773 [2024-11-20 17:13:14.814051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.773 [2024-11-20 17:13:14.814065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.773 qpair failed and we were unable to recover it. 00:30:22.773 [2024-11-20 17:13:14.824015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.773 [2024-11-20 17:13:14.824076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.773 [2024-11-20 17:13:14.824089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.773 [2024-11-20 17:13:14.824096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.773 [2024-11-20 17:13:14.824102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.773 [2024-11-20 17:13:14.824117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.773 qpair failed and we were unable to recover it. 
00:30:22.773 [2024-11-20 17:13:14.834082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.773 [2024-11-20 17:13:14.834138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.773 [2024-11-20 17:13:14.834162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.773 [2024-11-20 17:13:14.834170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.773 [2024-11-20 17:13:14.834176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.773 [2024-11-20 17:13:14.834191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.773 qpair failed and we were unable to recover it. 00:30:22.773 [2024-11-20 17:13:14.844136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.773 [2024-11-20 17:13:14.844183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.773 [2024-11-20 17:13:14.844197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.773 [2024-11-20 17:13:14.844205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.773 [2024-11-20 17:13:14.844212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.773 [2024-11-20 17:13:14.844227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.773 qpair failed and we were unable to recover it. 00:30:22.773 [2024-11-20 17:13:14.854115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.773 [2024-11-20 17:13:14.854163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.773 [2024-11-20 17:13:14.854177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.773 [2024-11-20 17:13:14.854184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.773 [2024-11-20 17:13:14.854190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.773 [2024-11-20 17:13:14.854204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.773 qpair failed and we were unable to recover it. 
00:30:22.773 [2024-11-20 17:13:14.864011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.773 [2024-11-20 17:13:14.864061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.773 [2024-11-20 17:13:14.864074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.773 [2024-11-20 17:13:14.864081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.773 [2024-11-20 17:13:14.864088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.773 [2024-11-20 17:13:14.864102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.773 qpair failed and we were unable to recover it. 00:30:22.773 [2024-11-20 17:13:14.874151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.773 [2024-11-20 17:13:14.874204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.774 [2024-11-20 17:13:14.874218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.774 [2024-11-20 17:13:14.874228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.774 [2024-11-20 17:13:14.874235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.774 [2024-11-20 17:13:14.874249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.774 qpair failed and we were unable to recover it. 00:30:22.774 [2024-11-20 17:13:14.884149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.774 [2024-11-20 17:13:14.884198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.774 [2024-11-20 17:13:14.884212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.774 [2024-11-20 17:13:14.884219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.774 [2024-11-20 17:13:14.884225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.774 [2024-11-20 17:13:14.884239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.774 qpair failed and we were unable to recover it. 
00:30:22.774 [2024-11-20 17:13:14.894210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.774 [2024-11-20 17:13:14.894256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.774 [2024-11-20 17:13:14.894270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.774 [2024-11-20 17:13:14.894277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.774 [2024-11-20 17:13:14.894284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.774 [2024-11-20 17:13:14.894298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.774 qpair failed and we were unable to recover it. 00:30:22.774 [2024-11-20 17:13:14.904242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.774 [2024-11-20 17:13:14.904293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.774 [2024-11-20 17:13:14.904307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.774 [2024-11-20 17:13:14.904313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.774 [2024-11-20 17:13:14.904320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.774 [2024-11-20 17:13:14.904334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.774 qpair failed and we were unable to recover it. 00:30:22.774 [2024-11-20 17:13:14.914260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.774 [2024-11-20 17:13:14.914309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.774 [2024-11-20 17:13:14.914322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.774 [2024-11-20 17:13:14.914329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.774 [2024-11-20 17:13:14.914335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.774 [2024-11-20 17:13:14.914350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.774 qpair failed and we were unable to recover it. 
00:30:22.774 [2024-11-20 17:13:14.924294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.774 [2024-11-20 17:13:14.924341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.774 [2024-11-20 17:13:14.924354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.774 [2024-11-20 17:13:14.924361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.774 [2024-11-20 17:13:14.924367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.774 [2024-11-20 17:13:14.924381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.774 qpair failed and we were unable to recover it. 00:30:22.774 [2024-11-20 17:13:14.934185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.774 [2024-11-20 17:13:14.934230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.774 [2024-11-20 17:13:14.934243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.774 [2024-11-20 17:13:14.934250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.774 [2024-11-20 17:13:14.934256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:22.774 [2024-11-20 17:13:14.934270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.774 qpair failed and we were unable to recover it. 00:30:23.037 [2024-11-20 17:13:14.944341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.037 [2024-11-20 17:13:14.944384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.037 [2024-11-20 17:13:14.944397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.037 [2024-11-20 17:13:14.944404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.037 [2024-11-20 17:13:14.944411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:23.037 [2024-11-20 17:13:14.944425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.037 qpair failed and we were unable to recover it. 
00:30:23.037 [2024-11-20 17:13:14.954392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.037 [2024-11-20 17:13:14.954435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.037 [2024-11-20 17:13:14.954448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.037 [2024-11-20 17:13:14.954455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.037 [2024-11-20 17:13:14.954461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:23.037 [2024-11-20 17:13:14.954476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.037 qpair failed and we were unable to recover it. 00:30:23.037 [2024-11-20 17:13:14.964400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.037 [2024-11-20 17:13:14.964457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.037 [2024-11-20 17:13:14.964471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.037 [2024-11-20 17:13:14.964478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.037 [2024-11-20 17:13:14.964484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:23.037 [2024-11-20 17:13:14.964498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.037 qpair failed and we were unable to recover it. 00:30:23.037 [2024-11-20 17:13:14.974428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.037 [2024-11-20 17:13:14.974471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.037 [2024-11-20 17:13:14.974484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.037 [2024-11-20 17:13:14.974491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.037 [2024-11-20 17:13:14.974497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:23.037 [2024-11-20 17:13:14.974511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.037 qpair failed and we were unable to recover it. 
00:30:23.037 [2024-11-20 17:13:14.984462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.037 [2024-11-20 17:13:14.984530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.037 [2024-11-20 17:13:14.984543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.037 [2024-11-20 17:13:14.984550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.037 [2024-11-20 17:13:14.984556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:23.037 [2024-11-20 17:13:14.984570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.037 qpair failed and we were unable to recover it. 00:30:23.037 [2024-11-20 17:13:14.994503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.037 [2024-11-20 17:13:14.994550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.037 [2024-11-20 17:13:14.994564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.037 [2024-11-20 17:13:14.994571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.037 [2024-11-20 17:13:14.994577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:23.037 [2024-11-20 17:13:14.994591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.037 qpair failed and we were unable to recover it. 00:30:23.037 [2024-11-20 17:13:15.004523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.037 [2024-11-20 17:13:15.004575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.037 [2024-11-20 17:13:15.004588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.037 [2024-11-20 17:13:15.004599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.037 [2024-11-20 17:13:15.004605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:23.037 [2024-11-20 17:13:15.004619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.037 qpair failed and we were unable to recover it. 
00:30:23.037 [2024-11-20 17:13:15.014543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.037 [2024-11-20 17:13:15.014589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.037 [2024-11-20 17:13:15.014602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.037 [2024-11-20 17:13:15.014609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.037 [2024-11-20 17:13:15.014615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:23.037 [2024-11-20 17:13:15.014629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.037 qpair failed and we were unable to recover it. 00:30:23.037 [2024-11-20 17:13:15.024563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.037 [2024-11-20 17:13:15.024607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.037 [2024-11-20 17:13:15.024620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.037 [2024-11-20 17:13:15.024627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.037 [2024-11-20 17:13:15.024634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:23.037 [2024-11-20 17:13:15.024647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.037 qpair failed and we were unable to recover it. 00:30:23.037 [2024-11-20 17:13:15.034587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.037 [2024-11-20 17:13:15.034633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.037 [2024-11-20 17:13:15.034646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.037 [2024-11-20 17:13:15.034653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.037 [2024-11-20 17:13:15.034659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:23.037 [2024-11-20 17:13:15.034673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.037 qpair failed and we were unable to recover it. 
00:30:23.037 [2024-11-20 17:13:15.044500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.037 [2024-11-20 17:13:15.044554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.037 [2024-11-20 17:13:15.044567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.037 [2024-11-20 17:13:15.044574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.037 [2024-11-20 17:13:15.044580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:23.037 [2024-11-20 17:13:15.044598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.037 qpair failed and we were unable to recover it. 00:30:23.037 [2024-11-20 17:13:15.054621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.037 [2024-11-20 17:13:15.054662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.037 [2024-11-20 17:13:15.054676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.037 [2024-11-20 17:13:15.054683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.037 [2024-11-20 17:13:15.054690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:23.037 [2024-11-20 17:13:15.054704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.038 qpair failed and we were unable to recover it. 00:30:23.038 [2024-11-20 17:13:15.064638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.038 [2024-11-20 17:13:15.064685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.038 [2024-11-20 17:13:15.064699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.038 [2024-11-20 17:13:15.064706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.038 [2024-11-20 17:13:15.064713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:23.038 [2024-11-20 17:13:15.064727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.038 qpair failed and we were unable to recover it. 
00:30:23.038 [2024-11-20 17:13:15.074644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.038 [2024-11-20 17:13:15.074687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.038 [2024-11-20 17:13:15.074700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.038 [2024-11-20 17:13:15.074707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.038 [2024-11-20 17:13:15.074714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:23.038 [2024-11-20 17:13:15.074728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.038 qpair failed and we were unable to recover it. 00:30:23.038 [2024-11-20 17:13:15.084716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.038 [2024-11-20 17:13:15.084762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.038 [2024-11-20 17:13:15.084776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.038 [2024-11-20 17:13:15.084783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.038 [2024-11-20 17:13:15.084789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:23.038 [2024-11-20 17:13:15.084803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.038 qpair failed and we were unable to recover it. 00:30:23.038 [2024-11-20 17:13:15.094752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.038 [2024-11-20 17:13:15.094805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.038 [2024-11-20 17:13:15.094819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.038 [2024-11-20 17:13:15.094827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.038 [2024-11-20 17:13:15.094834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3690000b90 00:30:23.038 [2024-11-20 17:13:15.094850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.038 qpair failed and we were unable to recover it. 
00:30:23.038 [2024-11-20 17:13:15.104831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.038 [2024-11-20 17:13:15.104953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.038 [2024-11-20 17:13:15.105017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.038 [2024-11-20 17:13:15.105042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.038 [2024-11-20 17:13:15.105063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3694000b90 00:30:23.038 [2024-11-20 17:13:15.105118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.038 qpair failed and we were unable to recover it. 00:30:23.038 [2024-11-20 17:13:15.114811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.038 [2024-11-20 17:13:15.114892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.038 [2024-11-20 17:13:15.114939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.038 [2024-11-20 17:13:15.114958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.038 [2024-11-20 17:13:15.114974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3694000b90 00:30:23.038 [2024-11-20 17:13:15.115015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.038 qpair failed and we were unable to recover it. 
00:30:23.038 [2024-11-20 17:13:15.115479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1ee00 is same with the state(6) to be set
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Write completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Write completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Write completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Write completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Write completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Write completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Write completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Write completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Write completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Write completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Write completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 Read completed with error (sct=0, sc=8)
00:30:23.038 starting I/O failed
00:30:23.038 [2024-11-20 17:13:15.116315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.038 [2024-11-20 17:13:15.124822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.038 [2024-11-20 17:13:15.124960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.038 [2024-11-20 17:13:15.125024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.038 [2024-11-20 17:13:15.125048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.038 [2024-11-20 17:13:15.125069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc290c0
00:30:23.038 [2024-11-20 17:13:15.125123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.038 qpair failed and we were unable to recover it.
00:30:23.038 [2024-11-20 17:13:15.134838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.038 [2024-11-20 17:13:15.134917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.038 [2024-11-20 17:13:15.134965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.038 [2024-11-20 17:13:15.134984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.038 [2024-11-20 17:13:15.134999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc290c0
00:30:23.038 [2024-11-20 17:13:15.135039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.038 qpair failed and we were unable to recover it.
00:30:23.038 [2024-11-20 17:13:15.144904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.038 [2024-11-20 17:13:15.144996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.038 [2024-11-20 17:13:15.145060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.038 [2024-11-20 17:13:15.145085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.038 [2024-11-20 17:13:15.145108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f369c000b90
00:30:23.038 [2024-11-20 17:13:15.145175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.039 qpair failed and we were unable to recover it.
00:30:23.039 [2024-11-20 17:13:15.154934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.039 [2024-11-20 17:13:15.155016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.039 [2024-11-20 17:13:15.155071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.039 [2024-11-20 17:13:15.155091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.039 [2024-11-20 17:13:15.155106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f369c000b90
00:30:23.039 [2024-11-20 17:13:15.155148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.039 qpair failed and we were unable to recover it.
00:30:23.039 [2024-11-20 17:13:15.155684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1ee00 (9): Bad file descriptor
00:30:23.039 Initializing NVMe Controllers
00:30:23.039 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:23.039 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:23.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:30:23.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:30:23.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:30:23.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:30:23.039 Initialization complete. Launching workers.
00:30:23.039 Starting thread on core 1
00:30:23.039 Starting thread on core 2
00:30:23.039 Starting thread on core 3
00:30:23.039 Starting thread on core 0
00:30:23.039 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:30:23.039
00:30:23.039 real 0m11.411s
00:30:23.039 user 0m21.904s
00:30:23.039 sys 0m3.887s
00:30:23.039 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:23.039 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:23.039 ************************************
00:30:23.039 END TEST nvmf_target_disconnect_tc2
00:30:23.039 ************************************
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:23.300 rmmod nvme_tcp
00:30:23.300 rmmod nvme_fabrics
00:30:23.300 rmmod nvme_keyring
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2155120 ']'
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2155120
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2155120 ']'
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2155120
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2155120
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2155120'
00:30:23.300 killing process with pid 2155120
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2155120
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2155120
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:30:23.300 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:23.561 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:23.561 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:23.561 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:23.561 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:23.561 17:13:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:25.479 17:13:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:25.479
00:30:25.479 real 0m21.762s
00:30:25.479 user 0m49.658s
00:30:25.479 sys 0m10.034s
00:30:25.479 17:13:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:25.479 17:13:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:30:25.479 ************************************
00:30:25.479 END TEST nvmf_target_disconnect
00:30:25.479 ************************************
00:30:25.479 17:13:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:30:25.479
00:30:25.479 real 6m35.163s
00:30:25.479 user 11m26.993s
00:30:25.479 sys 2m16.507s
00:30:25.479 17:13:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:25.479 17:13:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:30:25.479 ************************************
00:30:25.479 END TEST nvmf_host
00:30:25.479 ************************************
00:30:25.479 17:13:17 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:30:25.479 17:13:17 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:30:25.479 17:13:17 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:30:25.479 17:13:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:30:25.479 17:13:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:25.479 17:13:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:25.741 ************************************
00:30:25.741 START TEST nvmf_target_core_interrupt_mode
00:30:25.741 ************************************
00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:25.741 * Looking for test storage... 00:30:25.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:25.741 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:25.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.742 --rc genhtml_branch_coverage=1 00:30:25.742 --rc genhtml_function_coverage=1 00:30:25.742 --rc genhtml_legend=1 00:30:25.742 --rc geninfo_all_blocks=1 00:30:25.742 --rc geninfo_unexecuted_blocks=1 00:30:25.742 00:30:25.742 ' 00:30:25.742 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:25.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.742 --rc genhtml_branch_coverage=1 00:30:25.742 --rc genhtml_function_coverage=1 00:30:25.742 --rc genhtml_legend=1 00:30:25.742 --rc geninfo_all_blocks=1 00:30:25.742 --rc geninfo_unexecuted_blocks=1 00:30:25.742 00:30:25.742 ' 00:30:25.742 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:25.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.742 --rc genhtml_branch_coverage=1 00:30:25.742 --rc genhtml_function_coverage=1 00:30:25.742 --rc genhtml_legend=1 00:30:25.742 --rc geninfo_all_blocks=1 00:30:25.742 --rc geninfo_unexecuted_blocks=1 00:30:25.742 00:30:25.742 ' 00:30:25.742 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:25.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.742 --rc genhtml_branch_coverage=1 00:30:25.742 --rc genhtml_function_coverage=1 00:30:25.742 --rc genhtml_legend=1 00:30:25.742 --rc geninfo_all_blocks=1 00:30:25.742 --rc geninfo_unexecuted_blocks=1 00:30:25.742 00:30:25.742 ' 00:30:25.742 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:25.742 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:25.742 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:25.742 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:25.742 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:25.742 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:25.742 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:25.742 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:25.742 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:25.742 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:25.742 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:25.742 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:25.742 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:25.742 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:25.742 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:26.004 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:26.004 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.004 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:26.004 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:26.004 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.004 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:26.004 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:26.004 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.004 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.004 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.004 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.004 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:26.005 ************************************ 00:30:26.005 START TEST nvmf_abort 00:30:26.005 ************************************ 00:30:26.005 17:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:26.005 * Looking for test storage... 00:30:26.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:26.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.005 --rc genhtml_branch_coverage=1 00:30:26.005 --rc genhtml_function_coverage=1 00:30:26.005 --rc genhtml_legend=1 00:30:26.005 --rc geninfo_all_blocks=1 00:30:26.005 --rc geninfo_unexecuted_blocks=1 00:30:26.005 00:30:26.005 ' 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:26.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.005 --rc genhtml_branch_coverage=1 00:30:26.005 --rc genhtml_function_coverage=1 00:30:26.005 --rc genhtml_legend=1 00:30:26.005 --rc geninfo_all_blocks=1 00:30:26.005 --rc geninfo_unexecuted_blocks=1 00:30:26.005 00:30:26.005 ' 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:26.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.005 --rc genhtml_branch_coverage=1 00:30:26.005 --rc genhtml_function_coverage=1 00:30:26.005 --rc genhtml_legend=1 00:30:26.005 --rc geninfo_all_blocks=1 00:30:26.005 --rc geninfo_unexecuted_blocks=1 00:30:26.005 00:30:26.005 ' 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:26.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.005 --rc genhtml_branch_coverage=1 00:30:26.005 --rc genhtml_function_coverage=1 00:30:26.005 --rc genhtml_legend=1 00:30:26.005 --rc geninfo_all_blocks=1 00:30:26.005 --rc geninfo_unexecuted_blocks=1 00:30:26.005 00:30:26.005 ' 00:30:26.005 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.268 17:13:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:26.268 17:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:34.415 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:34.415 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:34.415 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:34.415 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:34.415 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:34.415 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:34.415 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:34.415 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:34.415 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:34.415 17:13:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:34.415 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:34.415 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:34.415 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:34.415 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:34.415 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:34.415 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:34.415 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:34.416 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:34.416 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:34.416 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:34.416 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:34.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:34.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:30:34.416 00:30:34.416 --- 10.0.0.2 ping statistics --- 00:30:34.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.416 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:34.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:34.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:30:34.416 00:30:34.416 --- 10.0.0.1 ping statistics --- 00:30:34.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.416 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:34.416 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:34.417 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:34.417 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:34.417 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:34.417 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2160714 00:30:34.417 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2160714 00:30:34.417 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:34.417 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2160714 ']' 00:30:34.417 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.417 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:34.417 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:34.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:34.417 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:34.417 17:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:34.417 [2024-11-20 17:13:25.823167] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:34.417 [2024-11-20 17:13:25.824302] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:30:34.417 [2024-11-20 17:13:25.824355] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:34.417 [2024-11-20 17:13:25.925501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:34.417 [2024-11-20 17:13:25.977088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:34.417 [2024-11-20 17:13:25.977142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:34.417 [2024-11-20 17:13:25.977157] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:34.417 [2024-11-20 17:13:25.977172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:34.417 [2024-11-20 17:13:25.977178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:34.417 [2024-11-20 17:13:25.979064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:34.417 [2024-11-20 17:13:25.979229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:34.417 [2024-11-20 17:13:25.979229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:34.417 [2024-11-20 17:13:26.056994] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:34.417 [2024-11-20 17:13:26.057913] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:34.417 [2024-11-20 17:13:26.058414] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
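The network bring-up that scrolls past above is the per-test pattern from nvmf/common.sh: the target-side port moves into a private namespace, both ends get addresses on 10.0.0.0/24, and a tagged iptables rule opens the NVMe/TCP port. A minimal sketch of the same sequence, assuming the two ice ports have already been renamed cvl_0_0 and cvl_0_1 by the setup scripts:

    # Isolate the target-side port in its own namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator keeps 10.0.0.1 in the host namespace; target gets 10.0.0.2.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open port 4420; the SPDK_NVMF comment tag is what cleanup greps for later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The two pings above (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) confirm both directions route before the target starts.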
00:30:34.417 [2024-11-20 17:13:26.058561] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:34.679 [2024-11-20 17:13:26.752119] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:34.679 Malloc0 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:34.679 Delay0 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.679 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:34.940 [2024-11-20 17:13:26.856122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:34.940 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.940 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:34.941 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.941 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:34.941 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.941 17:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:34.941 [2024-11-20 17:13:26.983910] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:36.855 Initializing NVMe Controllers 00:30:36.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:36.855 controller IO queue size 128 less than required 00:30:36.855 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:36.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:36.855 Initialization complete. Launching workers. 
00:30:36.855 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28242 00:30:36.855 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28299, failed to submit 66 00:30:36.855 success 28242, unsuccessful 57, failed 0 00:30:36.855 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:36.855 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.855 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.116 rmmod nvme_tcp 00:30:37.116 rmmod nvme_fabrics 00:30:37.116 rmmod nvme_keyring 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2160714 ']' 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2160714 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2160714 ']' 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2160714 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2160714 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2160714' 00:30:37.116 killing process with pid 2160714 
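For reference, the abort case above reduces to a handful of RPCs plus the example binary; every parameter below is taken from the trace, and the relative paths assume the SPDK build tree as the working directory:

    # Delay0 layers ~1s of artificial latency (values in microseconds) over
    # Malloc0, so submitted I/O is still queued when the aborts arrive.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # One core, one second, queue depth 128: deep enough that aborts race live I/O.
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The result line reads as expected for this workload: nearly every submitted abort succeeds (28242) and only a handful are unsuccessful (57) or cannot be submitted (66).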
00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2160714 00:30:37.116 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2160714 00:30:37.377 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:37.377 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:37.377 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:37.377 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:37.377 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:37.377 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:37.377 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:37.377 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:37.377 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:37.377 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.377 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.377 17:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.295 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:39.295 00:30:39.295 real 0m13.479s 00:30:39.295 user 0m10.941s 00:30:39.295 sys 0m6.911s 00:30:39.295 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:39.295 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:39.295 ************************************ 00:30:39.295 END TEST nvmf_abort 00:30:39.295 ************************************ 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:39.558 ************************************ 00:30:39.558 START TEST nvmf_ns_hotplug_stress 00:30:39.558 ************************************ 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:39.558 * Looking for test storage... 
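Teardown at the end of nvmf_abort is deliberately coarse: rather than deleting rules one by one, iptr filters anything tagged SPDK_NVMF out of a full table dump, and _remove_spdk_ns drops the test namespace. Sketched below; the explicit netns delete is an assumption about what that helper does:

    # Strip every rule carrying the SPDK_NVMF comment tag in a single pass.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Assumed equivalent of _remove_spdk_ns, plus the address flush from the trace.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1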
00:30:39.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:39.558 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:39.820 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:39.820 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:39.820 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:39.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.821 --rc genhtml_branch_coverage=1 00:30:39.821 --rc genhtml_function_coverage=1 00:30:39.821 --rc genhtml_legend=1 00:30:39.821 --rc geninfo_all_blocks=1 00:30:39.821 --rc geninfo_unexecuted_blocks=1 00:30:39.821 00:30:39.821 ' 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:39.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.821 --rc genhtml_branch_coverage=1 00:30:39.821 --rc genhtml_function_coverage=1 00:30:39.821 --rc genhtml_legend=1 00:30:39.821 --rc geninfo_all_blocks=1 00:30:39.821 --rc geninfo_unexecuted_blocks=1 00:30:39.821 00:30:39.821 ' 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:39.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.821 --rc genhtml_branch_coverage=1 00:30:39.821 --rc genhtml_function_coverage=1 00:30:39.821 --rc genhtml_legend=1 00:30:39.821 --rc geninfo_all_blocks=1 00:30:39.821 --rc geninfo_unexecuted_blocks=1 00:30:39.821 00:30:39.821 ' 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:39.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.821 --rc genhtml_branch_coverage=1 00:30:39.821 --rc genhtml_function_coverage=1 
00:30:39.821 --rc genhtml_legend=1 00:30:39.821 --rc geninfo_all_blocks=1 00:30:39.821 --rc geninfo_unexecuted_blocks=1 00:30:39.821 00:30:39.821 ' 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:39.821 17:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:47.975 17:13:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:47.975 17:13:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:47.975 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:47.976 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:47.976 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:47.976 
17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:47.976 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:47.976 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:47.976 17:13:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:47.976 17:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:47.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:47.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:30:47.976 00:30:47.976 --- 10.0.0.2 ping statistics --- 00:30:47.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:47.976 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:47.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:47.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:30:47.976 00:30:47.976 --- 10.0.0.1 ping statistics --- 00:30:47.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:47.976 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2165416 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2165416 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2165416 ']' 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:47.976 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:47.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
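nvmfappstart backgrounds the target inside the namespace and then waitforlisten polls until the RPC socket answers; max_retries=100 in the trace is that polling bound. A minimal sketch of the pattern, assuming rpc.py and the build paths from this run:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!

    # Poll the UNIX-domain RPC socket; rpc_get_methods succeeds once the app is up.
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done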
00:30:47.977 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:47.977 17:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:47.977 [2024-11-20 17:13:39.341293] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:47.977 [2024-11-20 17:13:39.342442] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:30:47.977 [2024-11-20 17:13:39.342489] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:47.977 [2024-11-20 17:13:39.447926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:47.977 [2024-11-20 17:13:39.499614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:47.977 [2024-11-20 17:13:39.499672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:47.977 [2024-11-20 17:13:39.499680] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:47.977 [2024-11-20 17:13:39.499688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:47.977 [2024-11-20 17:13:39.499694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:47.977 [2024-11-20 17:13:39.501530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:47.977 [2024-11-20 17:13:39.501690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:47.977 [2024-11-20 17:13:39.501691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:47.977 [2024-11-20 17:13:39.579923] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:47.977 [2024-11-20 17:13:39.580963] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:47.977 [2024-11-20 17:13:39.581439] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:47.977 [2024-11-20 17:13:39.581568] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
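Because the app was launched with --interrupt-mode, the app_thread and the three poll-group threads are created interrupt-capable, hence the slightly odd "to intr mode from intr mode" notices. The reactors' state can be inspected afterwards with the stock framework_get_reactors RPC (the exact output field names, e.g. in_interrupt, may vary between SPDK versions):

    # Each reactor entry should report that it is running in interrupt mode.
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_get_reactors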
00:30:48.239 17:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:48.239 17:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:30:48.239 17:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:48.239 17:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:48.239 17:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:48.239 17:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:48.239 17:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:48.239 17:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:48.239 [2024-11-20 17:13:40.390639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:48.502 17:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:48.502 17:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:48.763 [2024-11-20 17:13:40.807356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:48.763 17:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:49.025 17:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:49.287 Malloc0 00:30:49.287 17:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:49.287 Delay0 00:30:49.287 17:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:49.549 17:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:49.810 NULL1 00:30:49.810 17:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
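With cnode1 exporting both Delay0 and NULL1, the test enters its steady state: spdk_nvme_perf (PERF_PID) hammers the subsystem with 128-deep 512-byte random reads for 30 seconds while the script detaches and re-attaches the first namespace and grows NULL1 one step per pass. The remainder of the trace is that loop repeating; condensed, it amounts to:

    # Hot-plug churn while the initiator-side perf process is still alive.
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        ./scripts/rpc.py bdev_null_resize NULL1 "$null_size"
    done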
00:30:50.072 17:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2166108 00:30:50.072 17:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:50.072 17:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:30:50.072 17:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.072 17:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:50.334 17:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:50.334 17:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:50.596 true 00:30:50.596 17:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:30:50.596 17:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.858 17:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:51.120 17:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:51.120 17:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:51.120 true 00:30:51.120 17:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:30:51.120 17:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.381 17:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:51.644 17:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:51.644 17:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:51.644 true 00:30:51.904 17:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:30:51.904 17:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.904 17:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.165 17:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:52.165 17:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:52.426 true 00:30:52.426 17:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:30:52.426 17:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.686 17:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.686 17:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:52.686 17:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:52.947 true 00:30:52.948 17:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:30:52.948 17:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.209 17:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.209 17:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:53.209 17:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:53.470 true 00:30:53.470 17:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:30:53.470 17:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.730 17:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.991 17:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:53.991 17:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:53.991 true 00:30:53.991 17:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:30:53.991 17:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.251 17:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.512 17:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:54.512 17:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:54.512 true 00:30:54.512 17:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:30:54.512 17:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.773 17:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.033 17:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:55.033 17:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:55.033 true 00:30:55.294 17:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:30:55.294 17:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.294 17:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.555 17:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:55.555 17:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:55.816 true 00:30:55.816 17:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 2166108 00:30:55.816 17:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.816 17:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.084 17:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:56.084 17:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:56.346 true 00:30:56.346 17:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:30:56.346 17:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.607 17:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.607 17:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:56.607 17:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:56.869 true 00:30:56.869 17:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:30:56.869 17:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.130 17:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.130 17:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:57.130 17:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:57.390 true 00:30:57.390 17:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:30:57.391 17:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.651 17:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.912 17:13:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:57.912 17:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:57.912 true 00:30:57.912 17:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:30:57.912 17:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.173 17:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.434 17:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:58.434 17:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:58.434 true 00:30:58.434 17:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:30:58.434 17:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.695 17:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.956 17:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:58.956 17:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:58.956 true 00:30:58.956 17:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:30:58.956 17:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.216 17:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.476 17:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:59.476 17:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:59.736 true 00:30:59.736 17:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:30:59.736 17:13:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.736 17:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.998 17:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:59.998 17:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:00.258 true 00:31:00.258 17:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:00.258 17:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.258 17:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.519 17:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:00.519 17:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:00.780 true 00:31:00.780 17:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:00.780 17:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.040 17:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:01.040 17:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:01.040 17:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:01.301 true 00:31:01.301 17:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:01.301 17:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.563 17:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:01.563 17:13:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:01.563 17:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:01.823 true 00:31:01.823 17:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:01.823 17:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.084 17:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.345 17:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:02.345 17:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:02.345 true 00:31:02.345 17:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:02.345 17:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.605 17:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.866 17:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:02.866 17:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:02.866 true 00:31:02.866 17:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:02.866 17:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.125 17:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.384 17:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:03.384 17:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:03.384 true 00:31:03.644 17:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:03.644 17:13:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.644 17:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.904 17:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:03.904 17:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:04.164 true 00:31:04.164 17:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:04.164 17:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.164 17:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.425 17:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:04.425 17:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:04.686 true 00:31:04.686 17:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:04.686 17:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.947 17:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.947 17:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:04.947 17:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:05.208 true 00:31:05.208 17:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:05.208 17:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.473 17:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.473 17:13:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:05.473 17:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:05.737 true 00:31:05.737 17:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:05.737 17:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.071 17:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.071 17:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:31:06.071 17:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:31:06.373 true 00:31:06.373 17:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:06.373 17:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.640 17:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.640 17:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:31:06.640 17:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:31:06.900 true 00:31:06.900 17:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:06.900 17:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.161 17:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:07.161 17:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:31:07.161 17:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:31:07.423 true 00:31:07.423 17:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:07.423 17:13:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.683 17:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:07.946 17:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:31:07.946 17:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:31:07.946 true 00:31:07.946 17:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:07.946 17:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.207 17:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:08.468 17:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:31:08.468 17:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:31:08.468 true 00:31:08.468 17:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:08.468 17:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.728 17:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.000 17:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:31:09.001 17:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:31:09.001 true 00:31:09.001 17:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:09.001 17:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.264 17:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.525 17:14:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:31:09.525 17:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:31:09.525 true 00:31:09.785 17:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:09.785 17:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.785 17:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.045 17:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:31:10.046 17:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:31:10.307 true 00:31:10.307 17:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:10.307 17:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.307 17:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.568 17:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:31:10.568 17:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:31:10.829 true 00:31:10.829 17:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:10.829 17:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.091 17:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:11.091 17:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:31:11.091 17:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:31:11.353 true 00:31:11.353 17:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:11.353 17:14:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.614 17:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:11.614 17:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:31:11.614 17:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:31:11.874 true 00:31:11.874 17:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:11.874 17:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.134 17:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.394 17:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:31:12.394 17:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:31:12.394 true 00:31:12.394 17:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:12.394 17:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.656 17:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.917 17:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:31:12.917 17:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:31:12.917 true 00:31:12.917 17:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:12.917 17:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.178 17:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:13.439 17:14:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:31:13.439 17:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:31:13.700 true 00:31:13.700 17:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:13.700 17:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.700 17:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:13.960 17:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:31:13.960 17:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:31:14.220 true 00:31:14.220 17:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:14.220 17:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:14.220 17:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:14.480 17:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:31:14.480 17:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:31:14.741 true 00:31:14.741 17:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:14.741 17:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.001 17:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.001 17:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:31:15.001 17:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:31:15.262 true 00:31:15.262 17:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:15.262 17:14:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.523 17:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.523 17:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:31:15.523 17:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:31:15.783 true 00:31:15.783 17:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:15.783 17:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:16.044 17:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:16.304 17:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:31:16.304 17:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:31:16.304 true 00:31:16.304 17:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:16.304 17:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:16.566 17:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:16.826 17:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:31:16.826 17:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:31:16.826 true 00:31:16.826 17:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:16.826 17:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.086 17:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.346 17:14:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:31:17.346 17:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:31:17.606 true 00:31:17.606 17:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:17.606 17:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.606 17:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.867 17:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:31:17.867 17:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:31:18.128 true 00:31:18.128 17:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:18.128 17:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:18.128 17:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:18.388 17:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:31:18.388 17:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:31:18.649 true 00:31:18.649 17:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:18.649 17:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:18.649 17:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:18.909 17:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:31:18.909 17:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:31:19.170 true 00:31:19.170 17:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108 00:31:19.170 17:14:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:19.431 17:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:19.431 17:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:31:19.431 17:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:31:19.691 true
00:31:19.691 17:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108
00:31:19.691 17:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:19.953 17:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:19.953 17:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:31:19.953 17:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:31:20.213 true
00:31:20.213 17:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108
00:31:20.213 17:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:20.213 Initializing NVMe Controllers
00:31:20.213 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:20.213 Controller IO queue size 128, less than required.
00:31:20.213 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:20.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:20.213 Initialization complete. Launching workers.
00:31:20.213 ========================================================
00:31:20.213                                                                                Latency(us)
00:31:20.213 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:31:20.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30258.19      14.77    4230.11    1119.27   10936.76
00:31:20.213 ========================================================
00:31:20.213 Total                                                                  :   30258.19      14.77    4230.11    1119.27   10936.76
00:31:20.474 17:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:20.734 17:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:31:20.734 17:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:31:20.734 true
00:31:20.734 17:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2166108
00:31:20.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2166108) - No such process
00:31:20.734 17:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2166108
00:31:20.734 17:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:20.994 17:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:31:21.255 17:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:31:21.255 17:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:31:21.255 17:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:31:21.255 17:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:21.255 17:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:31:21.255 null0
00:31:21.255 17:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:21.255 17:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:21.255 17:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:31:21.516 null1
00:31:21.516 17:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:21.516 17:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
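As a quick sanity check on the perf summary table above (my own arithmetic, not part of the test output): 30258.19 IOPS of 512-byte reads works out to 30258.19 * 512 / 2^20 = 14.77 MiB/s, matching the MiB/s column, and Little's law for the -q 128 queue depth gives 128 / 4230.11 us = ~30259 IOPS, matching the IOPS column, so the run was latency-bound at queue depth 128 as expected:

    # Reproduce the two derived columns from the numbers reported in the table
    awk 'BEGIN {
        iops = 30258.19; avg_us = 4230.11; qd = 128; io_bytes = 512
        printf "MiB/s: %.2f\n", iops * io_bytes / (1024 * 1024)  # prints 14.77
        printf "IOPS : %.0f\n", qd / (avg_us * 1e-6)             # prints ~30259
    }'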
00:31:21.516 17:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:21.777 null2 00:31:21.777 17:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:21.777 17:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:21.777 17:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:21.777 null3 00:31:21.777 17:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:21.777 17:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:21.777 17:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:22.037 null4 00:31:22.038 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:22.038 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:22.038 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:22.298 null5 00:31:22.298 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:22.298 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:22.298 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:22.298 null6 00:31:22.298 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:22.298 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:22.298 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:22.560 null7 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2172291 2172292 2172295 2172296 2172298 2172300 2172302 2172304 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.560 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:22.821 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:22.821 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.821 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:22.821 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:22.821 17:14:14 
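From @62 onward the test fans out: each of the eight workers runs add_remove, which attaches its null bdev under a fixed namespace ID and detaches it again, ten times per worker (the (( i < 10 )) guard in the trace). The "wait 2172291 2172292 ..." entry above blocks on all eight worker pids, so everything that follows is the interleaved trace of those workers driving nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns concurrently. A sketch of the worker and the fan-out, reconstructed from the trace with assumed variable names:

    # Reconstructed sketch of add_remove (@14-@18) and the fan-out/wait (@62-@66).
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &   # worker i hotplugs nsid i+1 against null<i>
        pids+=($!)
    done
    wait "${pids[@]}"                        # the traced "wait 2172291 2172292 ..."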
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:22.821 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:22.821 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:22.821 17:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:23.082 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:23.344 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:23.344 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:23.344 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:23.344 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.344 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:23.344 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
00:31:23.344 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.344 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.344 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:23.344 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.344 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.344 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:23.344 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.344 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.344 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:23.345 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.345 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.345 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:23.345 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.345 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.345 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:23.345 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.345 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.345 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:23.345 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.345 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.345 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:23.606 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.606 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.606 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:23.606 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:23.606 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:23.606 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:23.606 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:23.606 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:23.606 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.606 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:23.606 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.606 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.606 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.866 17:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:23.866 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.866 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.866 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:23.866 17:14:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:24.126 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.126 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:24.126 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.126 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.126 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:24.126 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:24.126 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:24.126 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:24.126 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:24.126 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.126 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.126 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:24.126 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.126 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.126 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:24.126 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.127 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.127 17:14:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:24.127 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:24.387 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:24.647 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:24.908 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:24.908 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.908 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:24.908 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:24.908 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:24.908 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:24.908 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.908 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.908 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:24.908 17:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:24.908 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.908 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.908 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:24.908 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.908 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.908 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:24.908 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.908 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.908 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:24.908 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.908 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.908 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:24.908 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.908 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.908 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:25.169 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:25.169 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.169 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.169 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:25.169 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:25.169 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.169 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.169 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.169 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:25.169 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:25.169 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:25.169 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.169 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.169 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:25.170 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:25.431 17:14:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.431 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:25.691 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.691 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:25.691 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:25.691 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:25.691 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.691 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.691 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:25.691 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:25.691 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:25.691 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:25.691 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:25.691 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.691 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.691 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:25.691 17:14:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.692 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.692 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:25.692 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.692 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.692 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:25.692 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.692 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.692 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:25.692 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.692 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.692 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:25.692 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.692 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.692 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:25.952 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.952 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.952 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:25.952 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.952 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.952 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:25.952 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.952 17:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:25.952 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:25.952 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:25.952 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:25.952 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:25.952 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:26.211 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:26.211 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.212 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:26.472 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:26.472 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:26.472 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:26.472 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:26.472 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.472 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.472 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.472 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.472 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.472 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.472 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.472 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.472 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.472 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:26.732 rmmod nvme_tcp 00:31:26.732 rmmod nvme_fabrics 00:31:26.732 rmmod nvme_keyring 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2165416 ']' 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@518 -- # killprocess 2165416 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2165416 ']' 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2165416 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2165416 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2165416' 00:31:26.732 killing process with pid 2165416 00:31:26.732 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2165416 00:31:26.733 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2165416 00:31:26.994 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:26.994 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:26.994 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:26.994 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:26.994 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:26.994 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:26.994 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:31:26.994 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:26.994 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:26.994 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.994 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:26.994 17:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.908 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:28.908 00:31:28.908 real 0m49.513s 00:31:28.908 user 3m5.136s 00:31:28.908 sys 0m22.806s 00:31:28.908 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # 
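
[editor's note] The nvmftestfini teardown traced above condenses to: unload the initiator-side kernel modules, kill the target, and strip only the SPDK-tagged firewall rules before the network namespace goes away. A sketch using the pid and device names from this run:

modprobe -v -r nvme-tcp                                 # nvmfcleanup retries this up to 20x (for i in {1..20})
modprobe -v -r nvme-fabrics
kill 2165416 && wait 2165416                            # killprocess: the nvmf_tgt pid for this test
iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: drop only rules tagged SPDK_NVMF
ip -4 addr flush cvl_0_1                                # remove_spdk_ns then tears down the netns (xtrace is disabled for it)
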
xtrace_disable 00:31:28.908 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:28.908 ************************************ 00:31:28.908 END TEST nvmf_ns_hotplug_stress 00:31:28.908 ************************************ 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:29.170 ************************************ 00:31:29.170 START TEST nvmf_delete_subsystem 00:31:29.170 ************************************ 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:29.170 * Looking for test storage... 00:31:29.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:29.170 17:14:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:29.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.170 --rc genhtml_branch_coverage=1 00:31:29.170 --rc genhtml_function_coverage=1 00:31:29.170 --rc genhtml_legend=1 00:31:29.170 --rc geninfo_all_blocks=1 00:31:29.170 --rc geninfo_unexecuted_blocks=1 00:31:29.170 00:31:29.170 ' 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:29.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.170 --rc genhtml_branch_coverage=1 00:31:29.170 --rc genhtml_function_coverage=1 00:31:29.170 --rc genhtml_legend=1 00:31:29.170 --rc geninfo_all_blocks=1 00:31:29.170 --rc geninfo_unexecuted_blocks=1 00:31:29.170 00:31:29.170 ' 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:29.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.170 --rc genhtml_branch_coverage=1 00:31:29.170 --rc genhtml_function_coverage=1 00:31:29.170 --rc genhtml_legend=1 00:31:29.170 --rc geninfo_all_blocks=1 00:31:29.170 --rc 
geninfo_unexecuted_blocks=1 00:31:29.170 00:31:29.170 ' 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:29.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.170 --rc genhtml_branch_coverage=1 00:31:29.170 --rc genhtml_function_coverage=1 00:31:29.170 --rc genhtml_legend=1 00:31:29.170 --rc geninfo_all_blocks=1 00:31:29.170 --rc geninfo_unexecuted_blocks=1 00:31:29.170 00:31:29.170 ' 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:29.170 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:29.433 17:14:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:29.433 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:29.434 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:29.434 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:29.434 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:29.434 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:29.434 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:29.434 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:29.434 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.434 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:29.434 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.434 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:29.434 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:29.434 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:29.434 17:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:37.580 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:37.580 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:37.580 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:37.580 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:37.580 17:14:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:37.580 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:37.580 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:37.580 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:37.580 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:37.580 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:37.581 17:14:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:37.581 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:37.581 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.581 17:14:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:37.581 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:37.581 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:37.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:37.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:31:37.581 00:31:37.581 --- 10.0.0.2 ping statistics --- 00:31:37.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.581 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:31:37.581 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:37.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:37.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:31:37.582 00:31:37.582 --- 10.0.0.1 ping statistics --- 00:31:37.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.582 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2177450 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2177450 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2177450 ']' 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:37.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
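
[editor's note] The namespace plumbing traced above reduces to a point-to-point topology: the target-side port is moved into a fresh netns as 10.0.0.2 while the initiator port stays in the host namespace as 10.0.0.1, which the two pings just verified. Condensed, with the interface names enumerated for this machine:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP through to the initiator port
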
00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:37.582 [2024-11-20 17:14:28.746010] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:37.582 [2024-11-20 17:14:28.747149] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:31:37.582 [2024-11-20 17:14:28.747205] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:37.582 [2024-11-20 17:14:28.822484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:37.582 [2024-11-20 17:14:28.867436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:37.582 [2024-11-20 17:14:28.867487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:37.582 [2024-11-20 17:14:28.867493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:37.582 [2024-11-20 17:14:28.867501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:37.582 [2024-11-20 17:14:28.867506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:37.582 [2024-11-20 17:14:28.869004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.582 [2024-11-20 17:14:28.869004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.582 [2024-11-20 17:14:28.941391] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:37.582 [2024-11-20 17:14:28.941718] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:37.582 [2024-11-20 17:14:28.942110] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
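
[editor's note] Boiled down, nvmfappstart launches the target inside that namespace with interrupt mode enabled on two cores, then blocks until the RPC socket answers; the startup notices above confirm both reactors and all spdk_threads came up in interrupt mode. A sketch (the polling loop is a stand-in for the waitforlisten helper, not its actual body):

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
# stand-in for waitforlisten: poll until the app's RPC socket exists
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
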
00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:37.582 17:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:37.582 [2024-11-20 17:14:29.025823] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:37.582 [2024-11-20 17:14:29.058228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:37.582 NULL1 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.582 17:14:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:37.582 Delay0 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.582 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:37.583 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.583 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:37.583 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.583 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2177479 00:31:37.583 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:37.583 17:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:37.583 [2024-11-20 17:14:29.162293] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
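The xtrace entries above correspond to a short RPC sequence that builds the test target: a TCP transport, one subsystem with a listener, and a delay bdev exposed as its namespace. A consolidated sketch, with every flag copied from the log (scripts/rpc.py assumed as the RPC client the harness's rpc_cmd wraps):

    # sketch: rebuild the delete_subsystem test configuration by hand
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 1000 MB null bdev with 512-byte blocks
    ./scripts/rpc.py bdev_null_create NULL1 1000 512
    # wrap NULL1 with ~1 s artificial latency so I/O stays queued at the target
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0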
00:31:38.988 17:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:38.988 17:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.988 17:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:39.252 [many repeated "Read/Write completed with error (sct=0, sc=8)" completions, interleaved with "starting I/O failed: -6" submission failures; sct=0/sc=8 is the NVMe generic status "Command Aborted due to SQ Deletion", i.e. queued I/O being aborted because the subsystem is going away]
00:31:39.252 [2024-11-20 17:14:31.292528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2390680 is same with the state(6) to be set
00:31:39.253 [further aborted Read/Write completions]
00:31:39.253 [2024-11-20 17:14:31.293143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23902c0 is same with the state(6) to be set
00:31:39.253 [further aborted completions and "starting I/O failed: -6" entries]
00:31:39.253 [2024-11-20 17:14:31.295315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efca400d490 is same with the state(6) to be set
00:31:39.253 [further aborted Read/Write completions]
00:31:40.199 [2024-11-20 17:14:32.262575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23919a0 is same with the state(6) to be set
00:31:40.199 [further aborted Read/Write completions]
00:31:40.199 [2024-11-20 17:14:32.296307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23904a0 is same with the state(6) to be set
00:31:40.199 [further aborted Read/Write completions]
00:31:40.199 [2024-11-20 17:14:32.296464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2390860 is same with the state(6) to be set
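The aborted completions condensed above are the point of this test case: queue-depth-128 I/O is held in flight by Delay0's one-second latencies while the subsystem is deleted underneath it. A minimal sketch of the same sequence outside the harness, with the perf flags copied from the log (cores 2-3 in the 0xC mask match the "lcore 2"/"lcore 3" associations in the summary below):

    # sketch: delete the subsystem while queue-depth-128 I/O is in flight
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    wait   # perf exits reporting the aborted commands as errors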
00:31:40.199 [many aborted Read/Write completions]
00:31:40.199 [2024-11-20 17:14:32.297637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efca400d020 is same with the state(6) to be set
00:31:40.199 [many aborted Read/Write completions]
00:31:40.200 [2024-11-20 17:14:32.297844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efca400d7c0 is same with the state(6) to be set
00:31:40.200 Initializing NVMe Controllers
00:31:40.200 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:40.200 Controller IO queue size 128, less than required.
00:31:40.200 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:40.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:40.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:40.200 Initialization complete. Launching workers.
00:31:40.200 ========================================================
00:31:40.200                                                                         Latency(us)
00:31:40.200 Device Information                                                    :    IOPS   MiB/s    Average        min        max
00:31:40.200 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  167.41    0.08  898783.67     633.40 1006226.05
00:31:40.200 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  148.48    0.07 1111344.94     253.29 2002373.28
00:31:40.200 ========================================================
00:31:40.200 Total                                                                 :  315.89    0.15  998694.17     253.29 2002373.28
00:31:40.200
00:31:40.200 [2024-11-20 17:14:32.298740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23919a0 (9): Bad file descriptor
00:31:40.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:40.200 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.200 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:31:40.200 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2177479 00:31:40.200 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2177479
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2177479) - No such process
00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2177479 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2177479 00:31:40.776 17:14:32
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2177479 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:40.776 [2024-11-20 17:14:32.834076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2178146 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2178146 00:31:40.776 17:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:40.776 [2024-11-20 17:14:32.931994] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:31:41.347 17:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:41.347 17:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2178146 00:31:41.347 17:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:41.918 17:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:41.918 17:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2178146 00:31:41.918 17:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:42.488 17:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:42.488 17:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2178146 00:31:42.488 17:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:42.751 17:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:42.751 17:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2178146 00:31:42.751 17:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:43.322 17:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:43.322 17:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2178146 00:31:43.322 17:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:43.894 17:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:43.894 17:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2178146 00:31:43.894 17:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:43.894 Initializing NVMe Controllers 00:31:43.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:43.894 Controller IO queue size 128, less than required. 
00:31:43.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:43.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:43.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:43.894 Initialization complete. Launching workers.
00:31:43.894 ========================================================
00:31:43.894                                                                         Latency(us)
00:31:43.894 Device Information                                                    :    IOPS   MiB/s    Average        min        max
00:31:43.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06 1002468.10 1000253.57 1007845.79
00:31:43.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06 1003069.10 1000152.74 1009815.65
00:31:43.894 ========================================================
00:31:43.894 Total                                                                 :  256.00    0.12 1002768.60 1000152.74 1009815.65
00:31:43.894
00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2178146
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2178146) - No such process
00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2178146 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:44.465 rmmod nvme_tcp
00:31:44.465 rmmod nvme_fabrics
00:31:44.465 rmmod nvme_keyring
00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2177450 ']' 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2177450 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2177450 ']' 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0
2177450 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2177450 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2177450' 00:31:44.465 killing process with pid 2177450 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2177450 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2177450 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:44.465 17:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:47.008 00:31:47.008 real 0m17.570s 00:31:47.008 user 0m26.068s 00:31:47.008 sys 0m7.495s 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:47.008 ************************************ 00:31:47.008 END TEST nvmf_delete_subsystem 00:31:47.008 ************************************ 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode 
-- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:47.008 ************************************ 00:31:47.008 START TEST nvmf_host_management 00:31:47.008 ************************************ 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:47.008 * Looking for test storage... 00:31:47.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:47.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.008 --rc genhtml_branch_coverage=1 00:31:47.008 --rc genhtml_function_coverage=1 00:31:47.008 --rc genhtml_legend=1 00:31:47.008 --rc geninfo_all_blocks=1 00:31:47.008 --rc geninfo_unexecuted_blocks=1 00:31:47.008 00:31:47.008 ' 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:47.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.008 --rc genhtml_branch_coverage=1 00:31:47.008 --rc genhtml_function_coverage=1 00:31:47.008 --rc genhtml_legend=1 00:31:47.008 --rc geninfo_all_blocks=1 00:31:47.008 --rc geninfo_unexecuted_blocks=1 00:31:47.008 00:31:47.008 ' 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:47.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.008 --rc genhtml_branch_coverage=1 00:31:47.008 --rc genhtml_function_coverage=1 00:31:47.008 --rc genhtml_legend=1 00:31:47.008 --rc geninfo_all_blocks=1 00:31:47.008 --rc geninfo_unexecuted_blocks=1 00:31:47.008 00:31:47.008 ' 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:47.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.008 --rc genhtml_branch_coverage=1 00:31:47.008 --rc genhtml_function_coverage=1 00:31:47.008 --rc genhtml_legend=1 
00:31:47.008 --rc geninfo_all_blocks=1 00:31:47.008 --rc geninfo_unexecuted_blocks=1 00:31:47.008 00:31:47.008 ' 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.008 17:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.009 17:14:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:47.009 17:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:55.145 17:14:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:55.145 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:55.145 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
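The trace above is nvmf/common.sh sorting the host's NICs into per-family arrays by PCI vendor:device ID and, because this job sets SPDK_TEST_NVMF_NICS=e810, selecting the two Intel E810 ports it finds (0000:4b:00.0 and 0000:4b:00.1, device 0x159b, driver ice). A minimal sketch of that logic, assuming a pci_bus_cache associative array keyed "vendor:device" and with the script's $intel/$mellanox vendor variables written out as their numeric IDs:

    declare -A pci_bus_cache        # assumed shape: "0x8086:0x159b" -> "0000:4b:00.0 0000:4b:00.1"
    e810=() x722=() mlx=() net_devs=()
    e810+=(${pci_bus_cache["0x8086:0x1592"]})    # E810-C
    e810+=(${pci_bus_cache["0x8086:0x159b"]})    # E810-XXV, the two ports found above
    x722+=(${pci_bus_cache["0x8086:0x37d2"]})
    mlx+=(${pci_bus_cache["0x15b3:0x1017"]})     # one of the several ConnectX IDs probed above
    pci_devs=("${e810[@]}")                      # SPDK_TEST_NVMF_NICS=e810 for this job
    # each BDF is then resolved to its kernel interface through sysfs:
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")  # strip the path, keep the ifname
        net_devs+=("${pci_net_devs[@]}")         # yields cvl_0_0 and cvl_0_1 below
    done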
00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:55.145 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:55.145 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:55.145 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:55.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:55.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:31:55.146 00:31:55.146 --- 10.0.0.2 ping statistics --- 00:31:55.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.146 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:55.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:31:55.146 00:31:55.146 --- 10.0.0.1 ping statistics --- 00:31:55.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.146 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2183110 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2183110 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2183110 ']' 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:55.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:55.146 17:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:55.146 [2024-11-20 17:14:46.563831] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:55.146 [2024-11-20 17:14:46.564984] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:31:55.146 [2024-11-20 17:14:46.565035] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.146 [2024-11-20 17:14:46.666343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:55.146 [2024-11-20 17:14:46.719698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:55.146 [2024-11-20 17:14:46.719754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.146 [2024-11-20 17:14:46.719763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.146 [2024-11-20 17:14:46.719770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.146 [2024-11-20 17:14:46.719776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:55.146 [2024-11-20 17:14:46.721817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:55.146 [2024-11-20 17:14:46.721979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:55.146 [2024-11-20 17:14:46.722141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:55.146 [2024-11-20 17:14:46.722142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.146 [2024-11-20 17:14:46.800927] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:55.146 [2024-11-20 17:14:46.801914] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:55.146 [2024-11-20 17:14:46.802194] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:55.146 [2024-11-20 17:14:46.802563] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:55.146 [2024-11-20 17:14:46.802614] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
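nvmf_tcp_init above builds the whole test topology out of one NIC pair: the target port cvl_0_0 is moved into its own network namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables ACCEPT rule opens TCP/4420, and a ping in each direction proves the link. The target app is then launched inside that namespace in interrupt mode. Condensed to the commands actually traced, plus a hypothetical stand-in for the waitforlisten polling (the real helper lives in autotest_common.sh):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # assumed polling interval

Core mask 0x1E pins reactors to cores 1 through 4, which is why four reactor_run notices appear above, and each nvmf poll-group thread is switched to interrupt mode as the thread.c notices record.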
00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:55.407 [2024-11-20 17:14:47.443329] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:55.407 Malloc0 00:31:55.407 [2024-11-20 17:14:47.543684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:55.407 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:55.676 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2183202 00:31:55.676 17:14:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2183202 /var/tmp/bdevperf.sock 00:31:55.676 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2183202 ']' 00:31:55.676 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:55.676 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:55.677 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:55.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:55.677 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:55.677 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:55.677 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:55.677 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:55.677 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:55.677 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:55.677 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:55.677 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:55.677 { 00:31:55.677 "params": { 00:31:55.677 "name": "Nvme$subsystem", 00:31:55.677 "trtype": "$TEST_TRANSPORT", 00:31:55.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:55.677 "adrfam": "ipv4", 00:31:55.677 "trsvcid": "$NVMF_PORT", 00:31:55.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:55.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:55.677 "hdgst": ${hdgst:-false}, 00:31:55.677 "ddgst": ${ddgst:-false} 00:31:55.677 }, 00:31:55.677 "method": "bdev_nvme_attach_controller" 00:31:55.677 } 00:31:55.677 EOF 00:31:55.677 )") 00:31:55.677 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:55.677 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
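gen_nvmf_target_json above renders one bdev_nvme_attach_controller entry per subsystem from the heredoc template, with $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT expanded and jq pretty-printing the result (visible just below). bdevperf reads that config from /dev/fd/63 because the harness hands it over via bash process substitution rather than a temp file; an equivalent invocation would be:

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10    # queue depth 64, 64 KiB verify I/O for 10 s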
00:31:55.677 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:55.677 17:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:55.677 "params": { 00:31:55.677 "name": "Nvme0", 00:31:55.677 "trtype": "tcp", 00:31:55.677 "traddr": "10.0.0.2", 00:31:55.677 "adrfam": "ipv4", 00:31:55.677 "trsvcid": "4420", 00:31:55.677 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:55.677 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:55.677 "hdgst": false, 00:31:55.677 "ddgst": false 00:31:55.677 }, 00:31:55.677 "method": "bdev_nvme_attach_controller" 00:31:55.677 }' 00:31:55.677 [2024-11-20 17:14:47.655181] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:31:55.677 [2024-11-20 17:14:47.655251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183202 ] 00:31:55.677 [2024-11-20 17:14:47.750999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.677 [2024-11-20 17:14:47.804490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.001 Running I/O for 10 seconds... 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.617 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:56.617 [2024-11-20 17:14:48.555016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf14f20 is same with the state(6) to be set 00:31:56.617 [2024-11-20 17:14:48.555089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf14f20 is same with the state(6) to be set 00:31:56.617 [2024-11-20 17:14:48.555099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf14f20 is same with the state(6) to be set 00:31:56.617 [2024-11-20 17:14:48.555107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf14f20 is same with the state(6) to be set 00:31:56.617 [2024-11-20 17:14:48.555114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf14f20 is same with the state(6) to be set 00:31:56.617 [2024-11-20 17:14:48.555121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf14f20 is same with the state(6) to be set 00:31:56.617 [2024-11-20 17:14:48.555129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf14f20 is same with the state(6) to be set 00:31:56.617 [2024-11-20 17:14:48.555136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf14f20 is same with the state(6) to be set 00:31:56.617 [2024-11-20 17:14:48.555143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf14f20 is same with the state(6) to be set 00:31:56.617 [2024-11-20 17:14:48.555784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.617 [2024-11-20 17:14:48.555850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
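waitforio above gives the run up to ten iostat polls to show at least 100 completed reads on Nvme0n1; it sees 771 and returns success, at which point the test removes the host's NQN from the subsystem allow list. The long run of ABORTED - SQ DELETION completions that follows is the expected fallout: the target drops the qpair and every in-flight command is failed back to the initiator. A sketch of the polling loop as traced, with the inter-poll delay assumed since it is not visible here:

    ret=1
    for (( i = 10; i != 0; i-- )); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
                        jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then ret=0; break; fi
        sleep 0.25    # assumed; the trace does not show the delay between polls
    done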
00:31:56.617 [2024-11-20 17:14:48.555862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.617 [2024-11-20 17:14:48.555871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.617 [2024-11-20 17:14:48.555879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.617 [2024-11-20 17:14:48.555888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.617 [2024-11-20 17:14:48.555897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.617 [2024-11-20 17:14:48.555905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.617 [2024-11-20 17:14:48.555913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61d000 is same with the state(6) to be set 00:31:56.617 [2024-11-20 17:14:48.555979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.617 [2024-11-20 17:14:48.555990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.617 [2024-11-20 17:14:48.556007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.617 [2024-11-20 17:14:48.556016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.617 [2024-11-20 17:14:48.556026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.617 [2024-11-20 17:14:48.556034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.617 [2024-11-20 17:14:48.556043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.617 [2024-11-20 17:14:48.556051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.617 [2024-11-20 17:14:48.556061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.617 [2024-11-20 17:14:48.556077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.617 [2024-11-20 17:14:48.556087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.617 [2024-11-20 17:14:48.556094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 
17:14:48.556112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 
17:14:48.556303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 
17:14:48.556480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 
17:14:48.556661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.618 [2024-11-20 17:14:48.556785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.618 [2024-11-20 17:14:48.556794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 17:14:48.556802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.556812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 17:14:48.556820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.556830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 
17:14:48.556838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.556847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 17:14:48.556854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.556864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 17:14:48.556872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.556882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 17:14:48.556890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.556899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 17:14:48.556906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.556916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 17:14:48.556923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.556933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 17:14:48.556941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.556950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 17:14:48.556958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.556969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 17:14:48.556977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.556988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 17:14:48.556996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.557005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 
17:14:48.557012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.557021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 17:14:48.557028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.557038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 17:14:48.557046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.557056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 17:14:48.557063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.557072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 17:14:48.557079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.557091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 17:14:48.557100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.557109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 17:14:48.557117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.557126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.619 [2024-11-20 17:14:48.557134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.558415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:56.619 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.619 task offset: 107264 on job bdev=Nvme0n1 fails 00:31:56.619 00:31:56.619 Latency(us) 00:31:56.619 [2024-11-20T16:14:48.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:56.619 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:56.619 Job: Nvme0n1 ended in about 0.59 seconds with error 00:31:56.619 Verification LBA range: start 0x0 length 0x400 00:31:56.619 Nvme0n1 : 0.59 1419.53 88.72 109.19 0.00 40856.95 2061.65 39321.60 00:31:56.619 [2024-11-20T16:14:48.795Z] 
=================================================================================================================== 00:31:56.619 [2024-11-20T16:14:48.795Z] Total : 1419.53 88.72 109.19 0.00 40856.95 2061.65 39321.60 00:31:56.619 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:56.619 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.619 [2024-11-20 17:14:48.560911] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:56.619 [2024-11-20 17:14:48.560952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61d000 (9): Bad file descriptor 00:31:56.619 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:56.619 [2024-11-20 17:14:48.562566] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:31:56.619 [2024-11-20 17:14:48.562667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:56.619 [2024-11-20 17:14:48.562695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.619 [2024-11-20 17:14:48.562714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:31:56.619 [2024-11-20 17:14:48.562724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:31:56.619 [2024-11-20 17:14:48.562731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.619 [2024-11-20 17:14:48.562739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61d000 00:31:56.619 [2024-11-20 17:14:48.562761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61d000 (9): Bad file descriptor 00:31:56.619 [2024-11-20 17:14:48.562777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:56.619 [2024-11-20 17:14:48.562785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:56.619 [2024-11-20 17:14:48.562796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:56.619 [2024-11-20 17:14:48.562807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
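This is the heart of the host-management check: with verify I/O in flight, nvmf_subsystem_remove_host revokes the initiator's NQN, bdevperf's controller reset then fails to reconnect (the fabric CONNECT completes with sct 1, sc 132, matching the target-side error "Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'"), and nvmf_subsystem_add_host restores access so the follow-up run can succeed. The two RPCs bracketing the failure, as traced:

    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # bdevperf reset fails: "Connect command failed", "controller reinitialization failed"
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0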
00:31:56.619 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.619 17:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:57.565 17:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2183202 00:31:57.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2183202) - No such process 00:31:57.565 17:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:57.565 17:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:57.565 17:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:57.565 17:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:57.565 17:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:57.565 17:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:57.565 17:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:57.565 17:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:57.565 { 00:31:57.565 "params": { 00:31:57.565 "name": "Nvme$subsystem", 00:31:57.565 "trtype": "$TEST_TRANSPORT", 00:31:57.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:57.565 "adrfam": "ipv4", 00:31:57.565 "trsvcid": "$NVMF_PORT", 00:31:57.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:57.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:57.565 "hdgst": ${hdgst:-false}, 00:31:57.565 "ddgst": ${ddgst:-false} 00:31:57.565 }, 00:31:57.565 "method": "bdev_nvme_attach_controller" 00:31:57.565 } 00:31:57.565 EOF 00:31:57.565 )") 00:31:57.565 17:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:57.565 17:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:57.565 17:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:57.565 17:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:57.565 "params": { 00:31:57.565 "name": "Nvme0", 00:31:57.565 "trtype": "tcp", 00:31:57.565 "traddr": "10.0.0.2", 00:31:57.565 "adrfam": "ipv4", 00:31:57.565 "trsvcid": "4420", 00:31:57.565 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:57.565 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:57.565 "hdgst": false, 00:31:57.565 "ddgst": false 00:31:57.565 }, 00:31:57.565 "method": "bdev_nvme_attach_controller" 00:31:57.565 }' 00:31:57.565 [2024-11-20 17:14:49.644846] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:31:57.565 [2024-11-20 17:14:49.644923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183591 ] 00:31:57.565 [2024-11-20 17:14:49.738058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.826 [2024-11-20 17:14:49.775645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.826 Running I/O for 1 seconds... 00:31:59.211 1564.00 IOPS, 97.75 MiB/s 00:31:59.211 Latency(us) 00:31:59.211 [2024-11-20T16:14:51.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.211 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:59.211 Verification LBA range: start 0x0 length 0x400 00:31:59.211 Nvme0n1 : 1.02 1603.43 100.21 0.00 0.00 38874.15 1686.19 32986.45 00:31:59.211 [2024-11-20T16:14:51.387Z] =================================================================================================================== 00:31:59.211 [2024-11-20T16:14:51.387Z] Total : 1603.43 100.21 0.00 0.00 38874.15 1686.19 32986.45 00:31:59.211 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:59.212 rmmod nvme_tcp 00:31:59.212 rmmod nvme_fabrics 00:31:59.212 rmmod nvme_keyring 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2183110 ']' 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2183110 00:31:59.212 17:14:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2183110 ']' 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2183110 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2183110 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2183110' 00:31:59.212 killing process with pid 2183110 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2183110 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2183110 00:31:59.212 [2024-11-20 17:14:51.358018] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:59.212 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:59.474 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:59.474 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:59.474 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:59.474 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:59.474 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:59.474 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.474 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.474 17:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.389 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:01.389 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:01.389 00:32:01.389 real 0m14.694s 00:32:01.389 user 
0m19.279s 00:32:01.389 sys 0m7.540s 00:32:01.389 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:01.389 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:01.389 ************************************ 00:32:01.389 END TEST nvmf_host_management 00:32:01.389 ************************************ 00:32:01.389 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:01.389 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:01.389 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:01.389 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:01.389 ************************************ 00:32:01.389 START TEST nvmf_lvol 00:32:01.389 ************************************ 00:32:01.389 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:01.651 * Looking for test storage... 00:32:01.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:01.651 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:01.651 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:32:01.651 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:01.651 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:01.651 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:01.651 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:01.651 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:01.651 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:01.651 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:01.651 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:01.651 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:01.651 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:01.651 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:01.651 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:01.651 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:01.651 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:01.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.652 --rc genhtml_branch_coverage=1 00:32:01.652 --rc genhtml_function_coverage=1 00:32:01.652 --rc genhtml_legend=1 00:32:01.652 --rc geninfo_all_blocks=1 00:32:01.652 --rc geninfo_unexecuted_blocks=1 00:32:01.652 00:32:01.652 ' 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:01.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.652 --rc genhtml_branch_coverage=1 00:32:01.652 --rc genhtml_function_coverage=1 00:32:01.652 --rc genhtml_legend=1 00:32:01.652 --rc geninfo_all_blocks=1 00:32:01.652 --rc geninfo_unexecuted_blocks=1 00:32:01.652 00:32:01.652 ' 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:01.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.652 --rc genhtml_branch_coverage=1 00:32:01.652 --rc genhtml_function_coverage=1 00:32:01.652 --rc genhtml_legend=1 00:32:01.652 --rc geninfo_all_blocks=1 00:32:01.652 --rc geninfo_unexecuted_blocks=1 00:32:01.652 00:32:01.652 ' 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:01.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.652 --rc genhtml_branch_coverage=1 00:32:01.652 --rc genhtml_function_coverage=1 
00:32:01.652 --rc genhtml_legend=1 00:32:01.652 --rc geninfo_all_blocks=1 00:32:01.652 --rc geninfo_unexecuted_blocks=1 00:32:01.652 00:32:01.652 ' 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:01.652 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:01.653 17:14:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:32:01.653 17:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:09.799 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:09.799 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:32:09.799 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:09.799 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:09.799 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:09.799 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:32:09.799 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:09.799 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:09.800 17:15:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:09.800 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:09.800 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:09.800 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:09.800 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:09.800 17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:09.800 
17:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:09.800 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:09.800 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:09.800 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:09.800 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:09.800 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:09.800 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:09.800 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:09.800 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:09.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:09.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:32:09.800 00:32:09.800 --- 10.0.0.2 ping statistics --- 00:32:09.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.800 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:32:09.800 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:09.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:09.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:32:09.800 00:32:09.800 --- 10.0.0.1 ping statistics --- 00:32:09.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.800 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:32:09.800 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:09.800 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:32:09.800 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:09.801 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:09.801 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:09.801 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:09.801 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:09.801 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:09.801 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:09.801 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:32:09.801 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:09.801 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:09.801 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:09.801 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2188281 00:32:09.801 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2188281 00:32:09.801 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:32:09.801 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2188281 ']' 00:32:09.801 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.801 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:09.801 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.801 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:09.801 17:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:09.801 [2024-11-20 17:15:01.352227] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
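Both pings succeeding confirms the nvmf_tcp_init wiring traced above: one port of the two-port E810 NIC is moved into a private network namespace to act as the target (10.0.0.2), while the other stays in the root namespace as the initiator (10.0.0.1). Condensed from the trace, with the interface names and addressing this run uses:

# Condensed from the setup trace above; assumes cvl_0_0/cvl_0_1 exist and are idle.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP listener port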
00:32:09.801 [2024-11-20 17:15:01.353365] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:32:09.801 [2024-11-20 17:15:01.353418] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.801 [2024-11-20 17:15:01.453105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:09.801 [2024-11-20 17:15:01.506549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.801 [2024-11-20 17:15:01.506604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.801 [2024-11-20 17:15:01.506613] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:09.801 [2024-11-20 17:15:01.506620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:09.801 [2024-11-20 17:15:01.506628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:09.801 [2024-11-20 17:15:01.508478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.801 [2024-11-20 17:15:01.508638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.801 [2024-11-20 17:15:01.508638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:09.801 [2024-11-20 17:15:01.587495] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:09.801 [2024-11-20 17:15:01.588470] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:09.801 [2024-11-20 17:15:01.588884] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:09.801 [2024-11-20 17:15:01.589037] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:32:10.063 17:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:10.063 17:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:32:10.063 17:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:10.063 17:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:10.063 17:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:10.063 17:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:10.063 17:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:10.323 [2024-11-20 17:15:02.377689] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.323 17:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:10.584 17:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:10.584 17:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:10.844 17:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:10.844 17:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:11.104 17:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:11.104 17:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=db4a7f1f-175f-41fd-9f8c-8ee7f45c4056 00:32:11.104 17:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u db4a7f1f-175f-41fd-9f8c-8ee7f45c4056 lvol 20 00:32:11.365 17:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bcdd1bd0-bc40-49a8-bc14-3f5c9497faf3 00:32:11.365 17:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:11.625 17:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bcdd1bd0-bc40-49a8-bc14-3f5c9497faf3 00:32:11.625 17:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:11.886 [2024-11-20 17:15:03.929642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:32:11.886 17:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:12.147 17:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2188714 00:32:12.147 17:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:12.147 17:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:13.090 17:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot bcdd1bd0-bc40-49a8-bc14-3f5c9497faf3 MY_SNAPSHOT 00:32:13.353 17:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6abae50f-d391-4a79-b289-7077c3c5af44 00:32:13.353 17:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize bcdd1bd0-bc40-49a8-bc14-3f5c9497faf3 30 00:32:13.616 17:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6abae50f-d391-4a79-b289-7077c3c5af44 MY_CLONE 00:32:13.877 17:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=25e604f7-c65b-4f95-b7bb-29a8f9c1472c 00:32:13.877 17:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 25e604f7-c65b-4f95-b7bb-29a8f9c1472c 00:32:14.137 17:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2188714 00:32:24.137 Initializing NVMe Controllers 00:32:24.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:24.137 Controller IO queue size 128, less than required. 00:32:24.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:24.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:24.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:24.137 Initialization complete. Launching workers. 
00:32:24.137 ======================================================== 00:32:24.137 Latency(us) 00:32:24.137 Device Information : IOPS MiB/s Average min max 00:32:24.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15573.85 60.84 8222.11 1445.97 77979.71 00:32:24.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15207.47 59.40 8418.49 3937.32 82171.83 00:32:24.137 ======================================================== 00:32:24.137 Total : 30781.32 120.24 8319.13 1445.97 82171.83 00:32:24.137 00:32:24.137 17:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:24.137 17:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bcdd1bd0-bc40-49a8-bc14-3f5c9497faf3 00:32:24.137 17:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u db4a7f1f-175f-41fd-9f8c-8ee7f45c4056 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:24.137 rmmod nvme_tcp 00:32:24.137 rmmod nvme_fabrics 00:32:24.137 rmmod nvme_keyring 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2188281 ']' 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2188281 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2188281 ']' 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2188281 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2188281 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2188281' 00:32:24.137 killing process with pid 2188281 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2188281 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2188281 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:24.137 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:24.138 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:24.138 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:24.138 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.138 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:24.138 17:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.520 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:25.520 00:32:25.520 real 0m23.934s 00:32:25.520 user 0m56.268s 00:32:25.520 sys 0m10.674s 00:32:25.520 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.520 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:25.520 ************************************ 00:32:25.520 END TEST nvmf_lvol 00:32:25.520 ************************************ 00:32:25.520 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:25.520 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:25.520 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.520 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:25.520 ************************************ 00:32:25.520 START TEST nvmf_lvs_grow 00:32:25.520 
************************************ 00:32:25.520 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:25.520 * Looking for test storage... 00:32:25.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:25.520 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:25.520 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:32:25.520 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:25.781 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:25.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.782 --rc genhtml_branch_coverage=1 00:32:25.782 --rc genhtml_function_coverage=1 00:32:25.782 --rc genhtml_legend=1 00:32:25.782 --rc geninfo_all_blocks=1 00:32:25.782 --rc geninfo_unexecuted_blocks=1 00:32:25.782 00:32:25.782 ' 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:25.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.782 --rc genhtml_branch_coverage=1 00:32:25.782 --rc genhtml_function_coverage=1 00:32:25.782 --rc genhtml_legend=1 00:32:25.782 --rc geninfo_all_blocks=1 00:32:25.782 --rc geninfo_unexecuted_blocks=1 00:32:25.782 00:32:25.782 ' 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:25.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.782 --rc genhtml_branch_coverage=1 00:32:25.782 --rc genhtml_function_coverage=1 00:32:25.782 --rc genhtml_legend=1 00:32:25.782 --rc geninfo_all_blocks=1 00:32:25.782 --rc geninfo_unexecuted_blocks=1 00:32:25.782 00:32:25.782 ' 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:25.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.782 --rc genhtml_branch_coverage=1 00:32:25.782 --rc genhtml_function_coverage=1 00:32:25.782 --rc genhtml_legend=1 00:32:25.782 --rc geninfo_all_blocks=1 00:32:25.782 --rc geninfo_unexecuted_blocks=1 00:32:25.782 00:32:25.782 ' 00:32:25.782 17:15:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
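The common.sh sourcing traced above pins the test's NVMe-oF identity and ports before any target starts. A minimal sketch of that setup, assuming nvme-cli's gen-hostnqn; the uuid-stripping step is an inference from the NVME_HOSTNQN/NVME_HOSTID pair in the trace, not confirmed code:

    # Sketch of the host identity and port defaults shown in the trace above.
    # The ${...##*uuid:} derivation of NVME_HOSTID is an assumption.
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare uuid, reused for --hostid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")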
00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:25.782 17:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:33.925 17:15:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
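The scan above builds the candidate list of Intel E810 functions (device IDs 0x1592/0x159b); the loop that follows maps each hit to its kernel net device through sysfs. The same lookup as a standalone sketch, assuming lspci is available:

    # List net devices backing each Intel E810 (8086:159b) PCI function,
    # mirroring the /sys/bus/pci/devices/$pci/net/* expansion in the trace.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
        done
    done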
00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:33.925 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:33.925 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:33.925 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.925 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:33.926 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:33.926 17:15:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:33.926 17:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:33.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:33.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:32:33.926 00:32:33.926 --- 10.0.0.2 ping statistics --- 00:32:33.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:33.926 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:33.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:33.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:32:33.926 00:32:33.926 --- 10.0.0.1 ping statistics --- 00:32:33.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:33.926 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2195494 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2195494 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2195494 ']' 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:33.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:33.926 17:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:33.926 [2024-11-20 17:15:25.253926] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
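The namespace plumbing and reachability checks traced above condense to the following sequence (interface and namespace names as in the log; a sketch of the effect, not the exact common.sh code):

    # Target side: the first E810 port moves into a namespace as 10.0.0.2;
    # initiator side: the second port stays in the root namespace as 10.0.0.1.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

With both pings answering, nvmf_tgt is launched inside the namespace with --interrupt-mode -m 0x1, which is what the interrupt-mode notices in the EAL output below refer to.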
00:32:33.926 [2024-11-20 17:15:25.254893] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:32:33.926 [2024-11-20 17:15:25.254929] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:33.926 [2024-11-20 17:15:25.349882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.926 [2024-11-20 17:15:25.385139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:33.926 [2024-11-20 17:15:25.385175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:33.926 [2024-11-20 17:15:25.385183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:33.926 [2024-11-20 17:15:25.385190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:33.926 [2024-11-20 17:15:25.385196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:33.926 [2024-11-20 17:15:25.385750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:33.926 [2024-11-20 17:15:25.441593] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:33.926 [2024-11-20 17:15:25.441861] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:33.926 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:33.926 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:33.926 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:33.926 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:33.926 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:33.926 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:33.926 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:34.187 [2024-11-20 17:15:26.222540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:34.187 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:34.187 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:34.187 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:34.187 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:34.187 ************************************ 00:32:34.187 START TEST lvs_grow_clean 00:32:34.187 ************************************ 00:32:34.187 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:32:34.187 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:34.187 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:34.187 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:34.187 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:34.187 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:34.187 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:34.187 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:34.187 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:34.187 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:34.448 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:34.448 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:34.708 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1cb8d4df-e531-4b1d-88cf-e17d4cdfc3ec 00:32:34.708 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb8d4df-e531-4b1d-88cf-e17d4cdfc3ec 00:32:34.708 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:34.968 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:34.968 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:34.968 17:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1cb8d4df-e531-4b1d-88cf-e17d4cdfc3ec lvol 150 00:32:34.968 17:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=61a8b2ef-066b-4d91-a250-54027d858c00 00:32:34.968 17:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:34.968 17:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:35.227 [2024-11-20 17:15:27.206264] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:35.227 [2024-11-20 17:15:27.206419] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:35.227 true 00:32:35.227 17:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:35.227 17:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb8d4df-e531-4b1d-88cf-e17d4cdfc3ec 00:32:35.486 17:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:35.486 17:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:35.486 17:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 61a8b2ef-066b-4d91-a250-54027d858c00 00:32:35.745 17:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:35.745 [2024-11-20 17:15:27.898795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:35.745 17:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:36.005 17:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2196060 00:32:36.005 17:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:36.005 17:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:36.005 17:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2196060 /var/tmp/bdevperf.sock 00:32:36.005 17:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2196060 ']' 00:32:36.005 17:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:32:36.005 17:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:36.005 17:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:36.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:36.005 17:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:36.005 17:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:36.005 [2024-11-20 17:15:28.131628] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:32:36.005 [2024-11-20 17:15:28.131684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2196060 ] 00:32:36.265 [2024-11-20 17:15:28.217922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.265 [2024-11-20 17:15:28.254849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.836 17:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:36.836 17:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:36.836 17:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:37.408 Nvme0n1 00:32:37.408 17:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:37.408 [ 00:32:37.408 { 00:32:37.408 "name": "Nvme0n1", 00:32:37.408 "aliases": [ 00:32:37.408 "61a8b2ef-066b-4d91-a250-54027d858c00" 00:32:37.408 ], 00:32:37.408 "product_name": "NVMe disk", 00:32:37.408 "block_size": 4096, 00:32:37.408 "num_blocks": 38912, 00:32:37.408 "uuid": "61a8b2ef-066b-4d91-a250-54027d858c00", 00:32:37.408 "numa_id": 0, 00:32:37.408 "assigned_rate_limits": { 00:32:37.408 "rw_ios_per_sec": 0, 00:32:37.408 "rw_mbytes_per_sec": 0, 00:32:37.408 "r_mbytes_per_sec": 0, 00:32:37.408 "w_mbytes_per_sec": 0 00:32:37.408 }, 00:32:37.408 "claimed": false, 00:32:37.408 "zoned": false, 00:32:37.408 "supported_io_types": { 00:32:37.408 "read": true, 00:32:37.408 "write": true, 00:32:37.408 "unmap": true, 00:32:37.408 "flush": true, 00:32:37.408 "reset": true, 00:32:37.408 "nvme_admin": true, 00:32:37.408 "nvme_io": true, 00:32:37.408 "nvme_io_md": false, 00:32:37.408 "write_zeroes": true, 00:32:37.408 "zcopy": false, 00:32:37.408 "get_zone_info": false, 00:32:37.408 "zone_management": false, 00:32:37.408 "zone_append": false, 00:32:37.408 "compare": true, 00:32:37.408 "compare_and_write": true, 00:32:37.408 "abort": true, 00:32:37.408 "seek_hole": false, 00:32:37.408 "seek_data": false, 00:32:37.408 "copy": true, 
00:32:37.408 "nvme_iov_md": false 00:32:37.408 }, 00:32:37.408 "memory_domains": [ 00:32:37.408 { 00:32:37.408 "dma_device_id": "system", 00:32:37.408 "dma_device_type": 1 00:32:37.408 } 00:32:37.408 ], 00:32:37.408 "driver_specific": { 00:32:37.408 "nvme": [ 00:32:37.409 { 00:32:37.409 "trid": { 00:32:37.409 "trtype": "TCP", 00:32:37.409 "adrfam": "IPv4", 00:32:37.409 "traddr": "10.0.0.2", 00:32:37.409 "trsvcid": "4420", 00:32:37.409 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:37.409 }, 00:32:37.409 "ctrlr_data": { 00:32:37.409 "cntlid": 1, 00:32:37.409 "vendor_id": "0x8086", 00:32:37.409 "model_number": "SPDK bdev Controller", 00:32:37.409 "serial_number": "SPDK0", 00:32:37.409 "firmware_revision": "25.01", 00:32:37.409 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:37.409 "oacs": { 00:32:37.409 "security": 0, 00:32:37.409 "format": 0, 00:32:37.409 "firmware": 0, 00:32:37.409 "ns_manage": 0 00:32:37.409 }, 00:32:37.409 "multi_ctrlr": true, 00:32:37.409 "ana_reporting": false 00:32:37.409 }, 00:32:37.409 "vs": { 00:32:37.409 "nvme_version": "1.3" 00:32:37.409 }, 00:32:37.409 "ns_data": { 00:32:37.409 "id": 1, 00:32:37.409 "can_share": true 00:32:37.409 } 00:32:37.409 } 00:32:37.409 ], 00:32:37.409 "mp_policy": "active_passive" 00:32:37.409 } 00:32:37.409 } 00:32:37.409 ] 00:32:37.409 17:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2196233 00:32:37.409 17:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:37.409 17:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:37.409 Running I/O for 10 seconds... 
00:32:38.793 Latency(us) 00:32:38.793 [2024-11-20T16:15:30.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:38.793 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:38.793 Nvme0n1 : 1.00 17316.00 67.64 0.00 0.00 0.00 0.00 0.00 00:32:38.793 [2024-11-20T16:15:30.969Z] =================================================================================================================== 00:32:38.793 [2024-11-20T16:15:30.969Z] Total : 17316.00 67.64 0.00 0.00 0.00 0.00 0.00 00:32:38.793 00:32:39.365 17:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1cb8d4df-e531-4b1d-88cf-e17d4cdfc3ec 00:32:39.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:39.627 Nvme0n1 : 2.00 17389.50 67.93 0.00 0.00 0.00 0.00 0.00 00:32:39.627 [2024-11-20T16:15:31.803Z] =================================================================================================================== 00:32:39.627 [2024-11-20T16:15:31.803Z] Total : 17389.50 67.93 0.00 0.00 0.00 0.00 0.00 00:32:39.627 00:32:39.627 true 00:32:39.627 17:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb8d4df-e531-4b1d-88cf-e17d4cdfc3ec 00:32:39.627 17:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:39.887 17:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:39.887 17:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:39.887 17:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2196233 00:32:40.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:40.457 Nvme0n1 : 3.00 17477.33 68.27 0.00 0.00 0.00 0.00 0.00 00:32:40.457 [2024-11-20T16:15:32.633Z] =================================================================================================================== 00:32:40.457 [2024-11-20T16:15:32.633Z] Total : 17477.33 68.27 0.00 0.00 0.00 0.00 0.00 00:32:40.457 00:32:41.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:41.840 Nvme0n1 : 4.00 17616.50 68.81 0.00 0.00 0.00 0.00 0.00 00:32:41.840 [2024-11-20T16:15:34.016Z] =================================================================================================================== 00:32:41.840 [2024-11-20T16:15:34.016Z] Total : 17616.50 68.81 0.00 0.00 0.00 0.00 0.00 00:32:41.840 00:32:42.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:42.781 Nvme0n1 : 5.00 19097.00 74.60 0.00 0.00 0.00 0.00 0.00 00:32:42.781 [2024-11-20T16:15:34.957Z] =================================================================================================================== 00:32:42.781 [2024-11-20T16:15:34.957Z] Total : 19097.00 74.60 0.00 0.00 0.00 0.00 0.00 00:32:42.781 00:32:43.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:43.723 Nvme0n1 : 6.00 20147.50 78.70 0.00 0.00 0.00 0.00 0.00 00:32:43.723 [2024-11-20T16:15:35.899Z] 
=================================================================================================================== 00:32:43.723 [2024-11-20T16:15:35.899Z] Total : 20147.50 78.70 0.00 0.00 0.00 0.00 0.00 00:32:43.723 00:32:44.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:44.665 Nvme0n1 : 7.00 20907.00 81.67 0.00 0.00 0.00 0.00 0.00 00:32:44.665 [2024-11-20T16:15:36.841Z] =================================================================================================================== 00:32:44.665 [2024-11-20T16:15:36.841Z] Total : 20907.00 81.67 0.00 0.00 0.00 0.00 0.00 00:32:44.665 00:32:45.607 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:45.607 Nvme0n1 : 8.00 21468.88 83.86 0.00 0.00 0.00 0.00 0.00 00:32:45.607 [2024-11-20T16:15:37.783Z] =================================================================================================================== 00:32:45.607 [2024-11-20T16:15:37.783Z] Total : 21468.88 83.86 0.00 0.00 0.00 0.00 0.00 00:32:45.607 00:32:46.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:46.549 Nvme0n1 : 9.00 21912.78 85.60 0.00 0.00 0.00 0.00 0.00 00:32:46.549 [2024-11-20T16:15:38.725Z] =================================================================================================================== 00:32:46.549 [2024-11-20T16:15:38.725Z] Total : 21912.78 85.60 0.00 0.00 0.00 0.00 0.00 00:32:46.549 00:32:47.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:47.511 Nvme0n1 : 10.00 22272.70 87.00 0.00 0.00 0.00 0.00 0.00 00:32:47.511 [2024-11-20T16:15:39.687Z] =================================================================================================================== 00:32:47.511 [2024-11-20T16:15:39.687Z] Total : 22272.70 87.00 0.00 0.00 0.00 0.00 0.00 00:32:47.511 00:32:47.511 00:32:47.511 Latency(us) 00:32:47.511 [2024-11-20T16:15:39.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:47.511 Nvme0n1 : 10.00 22267.13 86.98 0.00 0.00 5744.50 2880.85 28617.39 00:32:47.511 [2024-11-20T16:15:39.687Z] =================================================================================================================== 00:32:47.511 [2024-11-20T16:15:39.687Z] Total : 22267.13 86.98 0.00 0.00 5744.50 2880.85 28617.39 00:32:47.511 { 00:32:47.511 "results": [ 00:32:47.511 { 00:32:47.511 "job": "Nvme0n1", 00:32:47.511 "core_mask": "0x2", 00:32:47.511 "workload": "randwrite", 00:32:47.511 "status": "finished", 00:32:47.511 "queue_depth": 128, 00:32:47.511 "io_size": 4096, 00:32:47.511 "runtime": 10.002548, 00:32:47.511 "iops": 22267.126336209534, 00:32:47.511 "mibps": 86.9809622508185, 00:32:47.511 "io_failed": 0, 00:32:47.511 "io_timeout": 0, 00:32:47.511 "avg_latency_us": 5744.496434275588, 00:32:47.511 "min_latency_us": 2880.8533333333335, 00:32:47.511 "max_latency_us": 28617.386666666665 00:32:47.511 } 00:32:47.511 ], 00:32:47.511 "core_count": 1 00:32:47.511 } 00:32:47.511 17:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2196060 00:32:47.511 17:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2196060 ']' 00:32:47.511 17:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2196060 
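The grow exercised in the per-second table above happens while bdevperf's I/O is in flight: total_data_clusters moves from 49 to 99 once the backing AIO file is enlarged and rescanned. Distilled from the commands traced earlier (file path and lvstore uuid as in the log):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    AIO_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
    LVS=1cb8d4df-e531-4b1d-88cf-e17d4cdfc3ec
    truncate -s 400M "$AIO_FILE"            # enlarge the backing file 200M -> 400M
    $RPC bdev_aio_rescan aio_bdev           # bdev picks up 51200 -> 102400 blocks
    $RPC bdev_lvol_grow_lvstore -u "$LVS"   # lvstore claims the new clusters
    $RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # 49 -> 99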
00:32:47.511 17:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:32:47.511 17:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:47.511 17:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2196060 00:32:47.771 17:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:47.771 17:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:47.771 17:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2196060' 00:32:47.771 killing process with pid 2196060 00:32:47.771 17:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2196060 00:32:47.771 Received shutdown signal, test time was about 10.000000 seconds 00:32:47.771 00:32:47.771 Latency(us) 00:32:47.771 [2024-11-20T16:15:39.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.771 [2024-11-20T16:15:39.947Z] =================================================================================================================== 00:32:47.771 [2024-11-20T16:15:39.947Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:47.771 17:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2196060 00:32:47.771 17:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:48.032 17:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:48.032 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb8d4df-e531-4b1d-88cf-e17d4cdfc3ec 00:32:48.032 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:48.292 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:48.292 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:48.292 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:48.292 [2024-11-20 17:15:40.458333] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:48.554 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb8d4df-e531-4b1d-88cf-e17d4cdfc3ec 
00:32:48.554 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:32:48.554 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb8d4df-e531-4b1d-88cf-e17d4cdfc3ec 00:32:48.554 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:48.554 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:48.554 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:48.554 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:48.554 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:48.554 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:48.554 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:48.554 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:48.555 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb8d4df-e531-4b1d-88cf-e17d4cdfc3ec 00:32:48.555 request: 00:32:48.555 { 00:32:48.555 "uuid": "1cb8d4df-e531-4b1d-88cf-e17d4cdfc3ec", 00:32:48.555 "method": "bdev_lvol_get_lvstores", 00:32:48.555 "req_id": 1 00:32:48.555 } 00:32:48.555 Got JSON-RPC error response 00:32:48.555 response: 00:32:48.555 { 00:32:48.555 "code": -19, 00:32:48.555 "message": "No such device" 00:32:48.555 } 00:32:48.555 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:32:48.555 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:48.555 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:48.555 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:48.555 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:48.816 aio_bdev 00:32:48.816 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
61a8b2ef-066b-4d91-a250-54027d858c00 00:32:48.816 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=61a8b2ef-066b-4d91-a250-54027d858c00 00:32:48.816 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:48.816 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:32:48.816 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:48.816 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:48.816 17:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:49.077 17:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 61a8b2ef-066b-4d91-a250-54027d858c00 -t 2000 00:32:49.077 [ 00:32:49.077 { 00:32:49.077 "name": "61a8b2ef-066b-4d91-a250-54027d858c00", 00:32:49.077 "aliases": [ 00:32:49.077 "lvs/lvol" 00:32:49.077 ], 00:32:49.077 "product_name": "Logical Volume", 00:32:49.077 "block_size": 4096, 00:32:49.077 "num_blocks": 38912, 00:32:49.077 "uuid": "61a8b2ef-066b-4d91-a250-54027d858c00", 00:32:49.077 "assigned_rate_limits": { 00:32:49.077 "rw_ios_per_sec": 0, 00:32:49.077 "rw_mbytes_per_sec": 0, 00:32:49.077 "r_mbytes_per_sec": 0, 00:32:49.077 "w_mbytes_per_sec": 0 00:32:49.077 }, 00:32:49.077 "claimed": false, 00:32:49.077 "zoned": false, 00:32:49.077 "supported_io_types": { 00:32:49.077 "read": true, 00:32:49.077 "write": true, 00:32:49.077 "unmap": true, 00:32:49.077 "flush": false, 00:32:49.077 "reset": true, 00:32:49.077 "nvme_admin": false, 00:32:49.077 "nvme_io": false, 00:32:49.077 "nvme_io_md": false, 00:32:49.077 "write_zeroes": true, 00:32:49.077 "zcopy": false, 00:32:49.077 "get_zone_info": false, 00:32:49.077 "zone_management": false, 00:32:49.077 "zone_append": false, 00:32:49.077 "compare": false, 00:32:49.077 "compare_and_write": false, 00:32:49.077 "abort": false, 00:32:49.077 "seek_hole": true, 00:32:49.077 "seek_data": true, 00:32:49.077 "copy": false, 00:32:49.077 "nvme_iov_md": false 00:32:49.077 }, 00:32:49.077 "driver_specific": { 00:32:49.077 "lvol": { 00:32:49.077 "lvol_store_uuid": "1cb8d4df-e531-4b1d-88cf-e17d4cdfc3ec", 00:32:49.077 "base_bdev": "aio_bdev", 00:32:49.077 "thin_provision": false, 00:32:49.077 "num_allocated_clusters": 38, 00:32:49.077 "snapshot": false, 00:32:49.077 "clone": false, 00:32:49.077 "esnap_clone": false 00:32:49.077 } 00:32:49.077 } 00:32:49.077 } 00:32:49.077 ] 00:32:49.077 17:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:32:49.077 17:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb8d4df-e531-4b1d-88cf-e17d4cdfc3ec 00:32:49.077 17:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:49.337 17:15:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:49.337 17:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb8d4df-e531-4b1d-88cf-e17d4cdfc3ec 00:32:49.337 17:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:49.598 17:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:49.598 17:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 61a8b2ef-066b-4d91-a250-54027d858c00 00:32:49.598 17:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1cb8d4df-e531-4b1d-88cf-e17d4cdfc3ec 00:32:49.859 17:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:50.120 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:50.121 00:32:50.121 real 0m15.815s 00:32:50.121 user 0m15.537s 00:32:50.121 sys 0m1.368s 00:32:50.121 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.121 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:50.121 ************************************ 00:32:50.121 END TEST lvs_grow_clean 00:32:50.121 ************************************ 00:32:50.121 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:50.121 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:50.121 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.121 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:50.121 ************************************ 00:32:50.121 START TEST lvs_grow_dirty 00:32:50.121 ************************************ 00:32:50.121 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:32:50.121 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:50.121 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:50.121 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:50.121 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:50.121 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:50.121 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:50.121 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:50.121 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:50.121 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:50.382 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:50.382 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:50.643 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=442132b7-ab77-41e4-baab-7415cb783a51 00:32:50.643 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 442132b7-ab77-41e4-baab-7415cb783a51 00:32:50.643 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:50.643 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:50.643 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:50.643 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 442132b7-ab77-41e4-baab-7415cb783a51 lvol 150 00:32:50.903 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c129d0e6-a15f-4db6-9d41-bc48a28f2270 00:32:50.903 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:50.903 17:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:50.903 [2024-11-20 17:15:43.066258] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:50.903 [2024-11-20 17:15:43.066413] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:50.903 true 00:32:51.164 17:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 442132b7-ab77-41e4-baab-7415cb783a51 00:32:51.164 17:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:51.164 17:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:51.164 17:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:51.425 17:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c129d0e6-a15f-4db6-9d41-bc48a28f2270 00:32:51.686 17:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:51.686 [2024-11-20 17:15:43.742729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:51.686 17:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:51.947 17:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2198983 00:32:51.947 17:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:51.947 17:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:51.947 17:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2198983 /var/tmp/bdevperf.sock 00:32:51.947 17:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2198983 ']' 00:32:51.947 17:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:51.947 17:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:51.947 17:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:51.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
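Condensed, the lvs_grow_dirty scenario set up above is: a 200 MiB aio file carrying an lvstore with 4 MiB clusters (49 data clusters), a 150 MiB lvol on top, then the file is doubled and rescanned so the lvstore can be grown while bdevperf drives I/O. A sketch with rpc.py standing in for the full path and $AIO for the aio_bdev backing file:

    truncate -s 200M "$AIO"
    rpc.py bdev_aio_create "$AIO" aio_bdev 4096
    LVS=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
    rpc.py bdev_lvol_create -u "$LVS" lvol 150
    truncate -s 400M "$AIO"
    rpc.py bdev_aio_rescan aio_bdev          # block count 51200 -> 102400
    rpc.py bdev_lvol_grow_lvstore -u "$LVS"  # issued mid-run below; clusters 49 -> 99

bdev_lvol_create_lvstore prints the new lvstore UUID on stdout, which is why the trace can capture it straight into a variable.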
00:32:51.947 17:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:51.947 17:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:51.947 [2024-11-20 17:15:43.975561] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:32:51.947 [2024-11-20 17:15:43.975615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2198983 ] 00:32:51.947 [2024-11-20 17:15:44.056606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.947 [2024-11-20 17:15:44.086552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.887 17:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:52.888 17:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:52.888 17:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:52.888 Nvme0n1 00:32:53.148 17:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:53.148 [ 00:32:53.148 { 00:32:53.148 "name": "Nvme0n1", 00:32:53.148 "aliases": [ 00:32:53.148 "c129d0e6-a15f-4db6-9d41-bc48a28f2270" 00:32:53.148 ], 00:32:53.148 "product_name": "NVMe disk", 00:32:53.148 "block_size": 4096, 00:32:53.148 "num_blocks": 38912, 00:32:53.148 "uuid": "c129d0e6-a15f-4db6-9d41-bc48a28f2270", 00:32:53.148 "numa_id": 0, 00:32:53.148 "assigned_rate_limits": { 00:32:53.148 "rw_ios_per_sec": 0, 00:32:53.148 "rw_mbytes_per_sec": 0, 00:32:53.148 "r_mbytes_per_sec": 0, 00:32:53.148 "w_mbytes_per_sec": 0 00:32:53.148 }, 00:32:53.148 "claimed": false, 00:32:53.148 "zoned": false, 00:32:53.148 "supported_io_types": { 00:32:53.148 "read": true, 00:32:53.148 "write": true, 00:32:53.148 "unmap": true, 00:32:53.148 "flush": true, 00:32:53.148 "reset": true, 00:32:53.148 "nvme_admin": true, 00:32:53.148 "nvme_io": true, 00:32:53.148 "nvme_io_md": false, 00:32:53.148 "write_zeroes": true, 00:32:53.148 "zcopy": false, 00:32:53.148 "get_zone_info": false, 00:32:53.148 "zone_management": false, 00:32:53.148 "zone_append": false, 00:32:53.148 "compare": true, 00:32:53.148 "compare_and_write": true, 00:32:53.148 "abort": true, 00:32:53.148 "seek_hole": false, 00:32:53.148 "seek_data": false, 00:32:53.148 "copy": true, 00:32:53.148 "nvme_iov_md": false 00:32:53.148 }, 00:32:53.148 "memory_domains": [ 00:32:53.148 { 00:32:53.148 "dma_device_id": "system", 00:32:53.148 "dma_device_type": 1 00:32:53.148 } 00:32:53.148 ], 00:32:53.148 "driver_specific": { 00:32:53.148 "nvme": [ 00:32:53.148 { 00:32:53.148 "trid": { 00:32:53.148 "trtype": "TCP", 00:32:53.148 "adrfam": "IPv4", 00:32:53.148 "traddr": "10.0.0.2", 00:32:53.148 "trsvcid": "4420", 00:32:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:53.148 }, 00:32:53.148 "ctrlr_data": 
{ 00:32:53.148 "cntlid": 1, 00:32:53.148 "vendor_id": "0x8086", 00:32:53.148 "model_number": "SPDK bdev Controller", 00:32:53.148 "serial_number": "SPDK0", 00:32:53.148 "firmware_revision": "25.01", 00:32:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:53.148 "oacs": { 00:32:53.148 "security": 0, 00:32:53.148 "format": 0, 00:32:53.148 "firmware": 0, 00:32:53.148 "ns_manage": 0 00:32:53.148 }, 00:32:53.148 "multi_ctrlr": true, 00:32:53.148 "ana_reporting": false 00:32:53.148 }, 00:32:53.148 "vs": { 00:32:53.148 "nvme_version": "1.3" 00:32:53.148 }, 00:32:53.148 "ns_data": { 00:32:53.148 "id": 1, 00:32:53.148 "can_share": true 00:32:53.148 } 00:32:53.148 } 00:32:53.148 ], 00:32:53.148 "mp_policy": "active_passive" 00:32:53.148 } 00:32:53.148 } 00:32:53.148 ] 00:32:53.149 17:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:53.149 17:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2199300 00:32:53.149 17:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:53.149 Running I/O for 10 seconds... 00:32:54.630 Latency(us) 00:32:54.630 [2024-11-20T16:15:46.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:54.630 Nvme0n1 : 1.00 17399.00 67.96 0.00 0.00 0.00 0.00 0.00 00:32:54.630 [2024-11-20T16:15:46.806Z] =================================================================================================================== 00:32:54.630 [2024-11-20T16:15:46.806Z] Total : 17399.00 67.96 0.00 0.00 0.00 0.00 0.00 00:32:54.630 00:32:55.203 17:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 442132b7-ab77-41e4-baab-7415cb783a51 00:32:55.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:55.203 Nvme0n1 : 2.00 17693.50 69.12 0.00 0.00 0.00 0.00 0.00 00:32:55.203 [2024-11-20T16:15:47.379Z] =================================================================================================================== 00:32:55.203 [2024-11-20T16:15:47.380Z] Total : 17693.50 69.12 0.00 0.00 0.00 0.00 0.00 00:32:55.204 00:32:55.464 true 00:32:55.464 17:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 442132b7-ab77-41e4-baab-7415cb783a51 00:32:55.464 17:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:55.464 17:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:55.464 17:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:55.464 17:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2199300 00:32:56.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:56.409 Nvme0n1 : 
3.00 17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:32:56.409 [2024-11-20T16:15:48.585Z] =================================================================================================================== 00:32:56.409 [2024-11-20T16:15:48.585Z] Total : 17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:32:56.409 00:32:57.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:57.351 Nvme0n1 : 4.00 17843.50 69.70 0.00 0.00 0.00 0.00 0.00 00:32:57.351 [2024-11-20T16:15:49.527Z] =================================================================================================================== 00:32:57.351 [2024-11-20T16:15:49.527Z] Total : 17843.50 69.70 0.00 0.00 0.00 0.00 0.00 00:32:57.351 00:32:58.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:58.295 Nvme0n1 : 5.00 18783.40 73.37 0.00 0.00 0.00 0.00 0.00 00:32:58.295 [2024-11-20T16:15:50.471Z] =================================================================================================================== 00:32:58.295 [2024-11-20T16:15:50.471Z] Total : 18783.40 73.37 0.00 0.00 0.00 0.00 0.00 00:32:58.295 00:32:59.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:59.237 Nvme0n1 : 6.00 19883.67 77.67 0.00 0.00 0.00 0.00 0.00 00:32:59.237 [2024-11-20T16:15:51.413Z] =================================================================================================================== 00:32:59.237 [2024-11-20T16:15:51.413Z] Total : 19883.67 77.67 0.00 0.00 0.00 0.00 0.00 00:32:59.237 00:33:00.214 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:00.214 Nvme0n1 : 7.00 20671.71 80.75 0.00 0.00 0.00 0.00 0.00 00:33:00.214 [2024-11-20T16:15:52.390Z] =================================================================================================================== 00:33:00.214 [2024-11-20T16:15:52.390Z] Total : 20671.71 80.75 0.00 0.00 0.00 0.00 0.00 00:33:00.214 00:33:01.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:01.600 Nvme0n1 : 8.00 21262.75 83.06 0.00 0.00 0.00 0.00 0.00 00:33:01.600 [2024-11-20T16:15:53.776Z] =================================================================================================================== 00:33:01.600 [2024-11-20T16:15:53.776Z] Total : 21262.75 83.06 0.00 0.00 0.00 0.00 0.00 00:33:01.600 00:33:02.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:02.172 Nvme0n1 : 9.00 21729.56 84.88 0.00 0.00 0.00 0.00 0.00 00:33:02.172 [2024-11-20T16:15:54.348Z] =================================================================================================================== 00:33:02.172 [2024-11-20T16:15:54.348Z] Total : 21729.56 84.88 0.00 0.00 0.00 0.00 0.00 00:33:02.172 00:33:03.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:03.556 Nvme0n1 : 10.00 22103.10 86.34 0.00 0.00 0.00 0.00 0.00 00:33:03.556 [2024-11-20T16:15:55.732Z] =================================================================================================================== 00:33:03.556 [2024-11-20T16:15:55.732Z] Total : 22103.10 86.34 0.00 0.00 0.00 0.00 0.00 00:33:03.556 00:33:03.556 00:33:03.556 Latency(us) 00:33:03.556 [2024-11-20T16:15:55.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:03.556 Nvme0n1 : 10.00 22105.70 86.35 0.00 0.00 5787.48 2880.85 30583.47 00:33:03.556 
[2024-11-20T16:15:55.732Z] =================================================================================================================== 00:33:03.556 [2024-11-20T16:15:55.732Z] Total : 22105.70 86.35 0.00 0.00 5787.48 2880.85 30583.47 00:33:03.556 { 00:33:03.556 "results": [ 00:33:03.556 { 00:33:03.556 "job": "Nvme0n1", 00:33:03.556 "core_mask": "0x2", 00:33:03.556 "workload": "randwrite", 00:33:03.556 "status": "finished", 00:33:03.556 "queue_depth": 128, 00:33:03.556 "io_size": 4096, 00:33:03.556 "runtime": 10.004616, 00:33:03.556 "iops": 22105.69601072145, 00:33:03.556 "mibps": 86.35037504188067, 00:33:03.556 "io_failed": 0, 00:33:03.556 "io_timeout": 0, 00:33:03.556 "avg_latency_us": 5787.481802202638, 00:33:03.556 "min_latency_us": 2880.8533333333335, 00:33:03.556 "max_latency_us": 30583.466666666667 00:33:03.556 } 00:33:03.556 ], 00:33:03.556 "core_count": 1 00:33:03.556 } 00:33:03.556 17:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2198983 00:33:03.556 17:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2198983 ']' 00:33:03.556 17:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2198983 00:33:03.556 17:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:33:03.556 17:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:03.556 17:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2198983 00:33:03.556 17:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:03.556 17:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:03.556 17:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2198983' 00:33:03.556 killing process with pid 2198983 00:33:03.556 17:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2198983 00:33:03.556 Received shutdown signal, test time was about 10.000000 seconds 00:33:03.556 00:33:03.556 Latency(us) 00:33:03.556 [2024-11-20T16:15:55.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.556 [2024-11-20T16:15:55.732Z] =================================================================================================================== 00:33:03.556 [2024-11-20T16:15:55.732Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:03.556 17:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2198983 00:33:03.556 17:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:03.556 17:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:33:03.816 17:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 442132b7-ab77-41e4-baab-7415cb783a51 00:33:03.816 17:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:04.077 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:04.077 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:04.077 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2195494 00:33:04.077 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2195494 00:33:04.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2195494 Killed "${NVMF_APP[@]}" "$@" 00:33:04.077 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:04.077 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:04.077 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:04.077 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:04.077 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:04.077 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2201324 00:33:04.077 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:04.077 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2201324 00:33:04.077 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2201324 ']' 00:33:04.077 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:04.077 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:04.077 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:04.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
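For the dirty variant the original target (pid 2195494) is killed with SIGKILL so the lvstore is never unloaded cleanly, and a fresh nvmf_tgt is started with --interrupt-mode. A minimal sketch of the relaunch-and-wait pattern; the harness's waitforlisten does roughly this polling (the rpc_get_methods probe is an assumption, not shown in this log):

    kill -9 "$old_nvmfpid"    # leave the lvstore dirty on purpose
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!
    until rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done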
00:33:04.077 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:04.077 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:04.077 [2024-11-20 17:15:56.164770] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:04.077 [2024-11-20 17:15:56.165783] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:33:04.077 [2024-11-20 17:15:56.165827] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:04.337 [2024-11-20 17:15:56.257682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.337 [2024-11-20 17:15:56.287981] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:04.337 [2024-11-20 17:15:56.288008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:04.337 [2024-11-20 17:15:56.288014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:04.337 [2024-11-20 17:15:56.288018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:04.338 [2024-11-20 17:15:56.288023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:04.338 [2024-11-20 17:15:56.288460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.338 [2024-11-20 17:15:56.339636] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:04.338 [2024-11-20 17:15:56.339826] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
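With the target back up, re-creating the aio bdev is what exercises the dirty-load path: the lvstore metadata was never flushed, so the blobstore has to recover it, as the notices just below show. Sketch:

    rpc.py bdev_aio_create "$AIO" aio_bdev 4096
    # expected on a dirty lvstore (see the blobstore.c notices that follow):
    #   Performing recovery on blobstore
    #   Recover: blob 0x0
    #   Recover: blob 0x1

After recovery the free/total cluster assertions repeat exactly as in the clean test.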
00:33:04.911 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:04.911 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:04.911 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:04.911 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:04.911 17:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:04.911 17:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:04.911 17:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:05.171 [2024-11-20 17:15:57.170646] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:05.171 [2024-11-20 17:15:57.170873] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:05.171 [2024-11-20 17:15:57.170963] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:05.171 17:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:05.171 17:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c129d0e6-a15f-4db6-9d41-bc48a28f2270 00:33:05.171 17:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c129d0e6-a15f-4db6-9d41-bc48a28f2270 00:33:05.171 17:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:05.172 17:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:05.172 17:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:05.172 17:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:05.172 17:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:05.432 17:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c129d0e6-a15f-4db6-9d41-bc48a28f2270 -t 2000 00:33:05.432 [ 00:33:05.432 { 00:33:05.432 "name": "c129d0e6-a15f-4db6-9d41-bc48a28f2270", 00:33:05.432 "aliases": [ 00:33:05.433 "lvs/lvol" 00:33:05.433 ], 00:33:05.433 "product_name": "Logical Volume", 00:33:05.433 "block_size": 4096, 00:33:05.433 "num_blocks": 38912, 00:33:05.433 "uuid": "c129d0e6-a15f-4db6-9d41-bc48a28f2270", 00:33:05.433 "assigned_rate_limits": { 00:33:05.433 "rw_ios_per_sec": 0, 00:33:05.433 "rw_mbytes_per_sec": 0, 00:33:05.433 
"r_mbytes_per_sec": 0, 00:33:05.433 "w_mbytes_per_sec": 0 00:33:05.433 }, 00:33:05.433 "claimed": false, 00:33:05.433 "zoned": false, 00:33:05.433 "supported_io_types": { 00:33:05.433 "read": true, 00:33:05.433 "write": true, 00:33:05.433 "unmap": true, 00:33:05.433 "flush": false, 00:33:05.433 "reset": true, 00:33:05.433 "nvme_admin": false, 00:33:05.433 "nvme_io": false, 00:33:05.433 "nvme_io_md": false, 00:33:05.433 "write_zeroes": true, 00:33:05.433 "zcopy": false, 00:33:05.433 "get_zone_info": false, 00:33:05.433 "zone_management": false, 00:33:05.433 "zone_append": false, 00:33:05.433 "compare": false, 00:33:05.433 "compare_and_write": false, 00:33:05.433 "abort": false, 00:33:05.433 "seek_hole": true, 00:33:05.433 "seek_data": true, 00:33:05.433 "copy": false, 00:33:05.433 "nvme_iov_md": false 00:33:05.433 }, 00:33:05.433 "driver_specific": { 00:33:05.433 "lvol": { 00:33:05.433 "lvol_store_uuid": "442132b7-ab77-41e4-baab-7415cb783a51", 00:33:05.433 "base_bdev": "aio_bdev", 00:33:05.433 "thin_provision": false, 00:33:05.433 "num_allocated_clusters": 38, 00:33:05.433 "snapshot": false, 00:33:05.433 "clone": false, 00:33:05.433 "esnap_clone": false 00:33:05.433 } 00:33:05.433 } 00:33:05.433 } 00:33:05.433 ] 00:33:05.433 17:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:05.433 17:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 442132b7-ab77-41e4-baab-7415cb783a51 00:33:05.433 17:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:05.693 17:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:05.693 17:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 442132b7-ab77-41e4-baab-7415cb783a51 00:33:05.693 17:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:05.954 17:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:05.954 17:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:05.954 [2024-11-20 17:15:58.028920] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:05.954 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 442132b7-ab77-41e4-baab-7415cb783a51 00:33:05.954 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:33:05.954 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 442132b7-ab77-41e4-baab-7415cb783a51 00:33:05.954 17:15:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:05.954 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:05.954 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:05.954 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:05.954 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:05.954 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:05.954 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:05.954 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:05.954 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 442132b7-ab77-41e4-baab-7415cb783a51 00:33:06.215 request: 00:33:06.215 { 00:33:06.215 "uuid": "442132b7-ab77-41e4-baab-7415cb783a51", 00:33:06.215 "method": "bdev_lvol_get_lvstores", 00:33:06.215 "req_id": 1 00:33:06.215 } 00:33:06.215 Got JSON-RPC error response 00:33:06.215 response: 00:33:06.215 { 00:33:06.215 "code": -19, 00:33:06.215 "message": "No such device" 00:33:06.215 } 00:33:06.215 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:33:06.215 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:06.215 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:06.215 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:06.215 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:06.476 aio_bdev 00:33:06.476 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c129d0e6-a15f-4db6-9d41-bc48a28f2270 00:33:06.476 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c129d0e6-a15f-4db6-9d41-bc48a28f2270 00:33:06.476 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:06.476 17:15:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:06.477 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:06.477 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:06.477 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:06.477 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c129d0e6-a15f-4db6-9d41-bc48a28f2270 -t 2000 00:33:06.738 [ 00:33:06.738 { 00:33:06.738 "name": "c129d0e6-a15f-4db6-9d41-bc48a28f2270", 00:33:06.738 "aliases": [ 00:33:06.738 "lvs/lvol" 00:33:06.738 ], 00:33:06.738 "product_name": "Logical Volume", 00:33:06.738 "block_size": 4096, 00:33:06.738 "num_blocks": 38912, 00:33:06.738 "uuid": "c129d0e6-a15f-4db6-9d41-bc48a28f2270", 00:33:06.738 "assigned_rate_limits": { 00:33:06.738 "rw_ios_per_sec": 0, 00:33:06.738 "rw_mbytes_per_sec": 0, 00:33:06.738 "r_mbytes_per_sec": 0, 00:33:06.738 "w_mbytes_per_sec": 0 00:33:06.738 }, 00:33:06.738 "claimed": false, 00:33:06.738 "zoned": false, 00:33:06.738 "supported_io_types": { 00:33:06.738 "read": true, 00:33:06.738 "write": true, 00:33:06.738 "unmap": true, 00:33:06.738 "flush": false, 00:33:06.738 "reset": true, 00:33:06.738 "nvme_admin": false, 00:33:06.738 "nvme_io": false, 00:33:06.738 "nvme_io_md": false, 00:33:06.738 "write_zeroes": true, 00:33:06.738 "zcopy": false, 00:33:06.738 "get_zone_info": false, 00:33:06.738 "zone_management": false, 00:33:06.738 "zone_append": false, 00:33:06.738 "compare": false, 00:33:06.738 "compare_and_write": false, 00:33:06.738 "abort": false, 00:33:06.738 "seek_hole": true, 00:33:06.738 "seek_data": true, 00:33:06.738 "copy": false, 00:33:06.738 "nvme_iov_md": false 00:33:06.738 }, 00:33:06.738 "driver_specific": { 00:33:06.738 "lvol": { 00:33:06.738 "lvol_store_uuid": "442132b7-ab77-41e4-baab-7415cb783a51", 00:33:06.738 "base_bdev": "aio_bdev", 00:33:06.738 "thin_provision": false, 00:33:06.738 "num_allocated_clusters": 38, 00:33:06.738 "snapshot": false, 00:33:06.738 "clone": false, 00:33:06.738 "esnap_clone": false 00:33:06.738 } 00:33:06.738 } 00:33:06.738 } 00:33:06.738 ] 00:33:06.738 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:06.738 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 442132b7-ab77-41e4-baab-7415cb783a51 00:33:06.738 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:06.999 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:06.999 17:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 442132b7-ab77-41e4-baab-7415cb783a51 00:33:07.000 17:15:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:07.000 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:07.000 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c129d0e6-a15f-4db6-9d41-bc48a28f2270 00:33:07.260 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 442132b7-ab77-41e4-baab-7415cb783a51 00:33:07.521 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:07.521 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:07.521 00:33:07.521 real 0m17.503s 00:33:07.521 user 0m35.576s 00:33:07.521 sys 0m2.915s 00:33:07.521 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:07.521 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:07.521 ************************************ 00:33:07.521 END TEST lvs_grow_dirty 00:33:07.521 ************************************ 00:33:07.781 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:07.781 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:33:07.781 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:33:07.781 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:33:07.781 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:07.781 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:33:07.781 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:33:07.781 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:33:07.781 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:07.781 nvmf_trace.0 00:33:07.781 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
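The process_shm step above archives /dev/shm/nvmf_trace.0 into the output directory, matching the earlier startup notice that the file can be copied for offline analysis. A sketch of inspecting it afterwards (the -f flag naming the trace file is an assumption; decoding is not shown in this log):

    tar -xzf nvmf_trace.0_shm.tar.gz
    spdk_trace -f nvmf_trace.0 | less   # decode the tracepoints captured during the run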
00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:07.782 rmmod nvme_tcp 00:33:07.782 rmmod nvme_fabrics 00:33:07.782 rmmod nvme_keyring 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2201324 ']' 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2201324 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2201324 ']' 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2201324 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2201324 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2201324' 00:33:07.782 killing process with pid 2201324 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2201324 00:33:07.782 17:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2201324 00:33:08.042 17:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:08.042 17:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:08.042 17:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:08.042 17:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:33:08.042 17:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:33:08.042 17:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:08.042 17:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:33:08.042 17:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:08.042 17:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:08.042 17:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.042 17:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:08.042 17:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.954 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:09.954 00:33:09.954 real 0m44.554s 00:33:09.954 user 0m54.025s 00:33:09.954 sys 0m10.339s 00:33:09.954 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:09.954 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:09.954 ************************************ 00:33:09.954 END TEST nvmf_lvs_grow 00:33:09.954 ************************************ 00:33:10.215 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:10.215 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:10.215 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:10.215 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:10.215 ************************************ 00:33:10.215 START TEST nvmf_bdev_io_wait 00:33:10.215 ************************************ 00:33:10.215 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:10.215 * Looking for test storage... 
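The END TEST banner and the real/user/sys triple above come from the run_test wrapper in autotest_common.sh, which brackets each test script with banners and bash's time builtin before the storage probe below locates the next script. An illustrative sketch of that wrapper (simplified; not the exact implementation):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        time "$@"                 # emits the real/user/sys lines seen in this log
        local rc=$?
        echo "END TEST $name"
        return $rc
    }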
00:33:10.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:10.215 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:10.215 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:33:10.215 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:10.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.477 --rc genhtml_branch_coverage=1 00:33:10.477 --rc genhtml_function_coverage=1 00:33:10.477 --rc genhtml_legend=1 00:33:10.477 --rc geninfo_all_blocks=1 00:33:10.477 --rc geninfo_unexecuted_blocks=1 00:33:10.477 00:33:10.477 ' 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:10.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.477 --rc genhtml_branch_coverage=1 00:33:10.477 --rc genhtml_function_coverage=1 00:33:10.477 --rc genhtml_legend=1 00:33:10.477 --rc geninfo_all_blocks=1 00:33:10.477 --rc geninfo_unexecuted_blocks=1 00:33:10.477 00:33:10.477 ' 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:10.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.477 --rc genhtml_branch_coverage=1 00:33:10.477 --rc genhtml_function_coverage=1 00:33:10.477 --rc genhtml_legend=1 00:33:10.477 --rc geninfo_all_blocks=1 00:33:10.477 --rc geninfo_unexecuted_blocks=1 00:33:10.477 00:33:10.477 ' 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:10.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.477 --rc genhtml_branch_coverage=1 00:33:10.477 --rc genhtml_function_coverage=1 00:33:10.477 --rc genhtml_legend=1 00:33:10.477 --rc geninfo_all_blocks=1 00:33:10.477 --rc 
geninfo_unexecuted_blocks=1 00:33:10.477 00:33:10.477 ' 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:10.477 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:33:10.478 17:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
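The scripts/common.sh trace above (cmp_versions via lt 1.15 2) is the harness deciding that the installed lcov predates 2.0 and therefore needs the --rc lcov_branch_coverage=1 style of options. The algorithm is a plain field-wise numeric compare after splitting on the version separators; a self-contained sketch of the same idea (version_lt is an illustrative name, not the helper's real one):

    version_lt() {                       # succeeds if $1 sorts before $2
        local IFS=.-:                    # split fields on '.', '-' and ':'
        local -a a b; local i
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                         # equal versions are not 'less than'
    }
    version_lt 1.15 2 && echo 'pre-2.0 lcov: use --rc lcov_branch_coverage=1'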
00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
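gather_supported_nvmf_pci_devs above builds whitelists of PCI vendor:device IDs per NIC family (Intel E810 0x1592/0x159b, X722 0x37d2, a set of Mellanox ConnectX IDs under vendor 0x15b3) and then walks the detected devices, resolving each match to its kernel net interface through sysfs. The per-device check reduces to roughly this (device path and IDs taken from this run; only the E810 case is shown):

    pci=0000:4b:00.0
    vendor=$(< "/sys/bus/pci/devices/$pci/vendor")    # 0x8086 (Intel)
    device=$(< "/sys/bus/pci/devices/$pci/device")    # 0x159b
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b)                  # E810 IDs from the trace
            ls "/sys/bus/pci/devices/$pci/net/"       # -> cvl_0_0
            ;;
    esac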
00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:18.618 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:18.618 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:18.618 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:18.619 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:18.619 
17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:18.619 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:18.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:18.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:33:18.619 00:33:18.619 --- 10.0.0.2 ping statistics --- 00:33:18.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.619 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:18.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:18.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:33:18.619 00:33:18.619 --- 10.0.0.1 ping statistics --- 00:33:18.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.619 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2206352 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2206352 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2206352 ']' 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:18.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
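The nvmf_tcp_init trace above builds the point-to-point test rig: the first E810 port (cvl_0_0) moves into a dedicated network namespace to act as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, a tagged iptables rule opens TCP port 4420, and the two pings (0.703 ms / 0.284 ms above) prove the wire before any NVMe traffic flows. The same setup as straight commands (interface and namespace names as in this run; the iptables comment text here is illustrative):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # tagged SPDK_NVMF so teardown can strip it via iptables-save | grep -v SPDK_NVMF
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: allow test traffic'
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1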
00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:18.619 17:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.619 [2024-11-20 17:16:10.009540] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:18.619 [2024-11-20 17:16:10.010675] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:33:18.619 [2024-11-20 17:16:10.010727] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:18.620 [2024-11-20 17:16:10.115571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:18.620 [2024-11-20 17:16:10.172114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:18.620 [2024-11-20 17:16:10.172187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:18.620 [2024-11-20 17:16:10.172196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:18.620 [2024-11-20 17:16:10.172203] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:18.620 [2024-11-20 17:16:10.172210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:18.620 [2024-11-20 17:16:10.174223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.620 [2024-11-20 17:16:10.174328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:18.620 [2024-11-20 17:16:10.174365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:18.620 [2024-11-20 17:16:10.174369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.620 [2024-11-20 17:16:10.175107] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
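nvmf_tgt comes up inside the target namespace with -m 0xF and --interrupt-mode, which is why the startup notices above report exactly four reactors and threads switching to intr mode: the mask is a bitmap of cores (one reactor per set bit), and interrupt mode parks idle reactors on file-descriptor events instead of busy polling. The mask arithmetic, for reference:

    # -m 0xF == 0b1111 -> four cores, one reactor each (cores 0-3 in this run)
    printf 'mask for first %d cores: 0x%X\n' 4 $(( (1 << 4) - 1 ))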
00:33:18.880 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.881 [2024-11-20 17:16:10.955256] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:18.881 [2024-11-20 17:16:10.955847] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:18.881 [2024-11-20 17:16:10.955962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:18.881 [2024-11-20 17:16:10.956189] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
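The two RPCs traced above are the crux of this test: because nvmf_tgt was started with --wait-for-rpc, the script can call bdev_set_options before framework_start_init and shrink the global bdev_io pool to five entries with a per-thread cache of one. With bdevperf about to submit at queue depth 128, bdev_io allocations are guaranteed to fail transiently, forcing I/O through the spdk_bdev_queue_io_wait retry path that nvmf_bdev_io_wait exists to cover. Stated plainly (default RPC socket assumed):

    scripts/rpc.py bdev_set_options -p 5 -c 1   # -p bdev_io pool size, -c per-thread cache
    scripts/rpc.py framework_start_init         # only now does subsystem init proceed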
00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.881 [2024-11-20 17:16:10.967636] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.881 17:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.881 Malloc0 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.881 [2024-11-20 17:16:11.043963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2206416 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2206418 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:18.881 { 00:33:18.881 "params": { 00:33:18.881 "name": "Nvme$subsystem", 00:33:18.881 "trtype": "$TEST_TRANSPORT", 00:33:18.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.881 "adrfam": "ipv4", 00:33:18.881 "trsvcid": "$NVMF_PORT", 00:33:18.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.881 "hdgst": ${hdgst:-false}, 00:33:18.881 "ddgst": ${ddgst:-false} 00:33:18.881 }, 00:33:18.881 "method": "bdev_nvme_attach_controller" 00:33:18.881 } 00:33:18.881 EOF 00:33:18.881 )") 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2206420 00:33:18.881 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:19.143 { 00:33:19.143 "params": { 00:33:19.143 "name": "Nvme$subsystem", 00:33:19.143 "trtype": "$TEST_TRANSPORT", 00:33:19.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:19.143 "adrfam": "ipv4", 00:33:19.143 "trsvcid": "$NVMF_PORT", 00:33:19.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:19.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:19.143 "hdgst": ${hdgst:-false}, 00:33:19.143 "ddgst": ${ddgst:-false} 00:33:19.143 }, 00:33:19.143 "method": "bdev_nvme_attach_controller" 00:33:19.143 } 00:33:19.143 EOF 00:33:19.143 )") 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2206423 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:19.143 { 00:33:19.143 "params": { 00:33:19.143 "name": "Nvme$subsystem", 00:33:19.143 "trtype": "$TEST_TRANSPORT", 00:33:19.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:19.143 "adrfam": "ipv4", 00:33:19.143 "trsvcid": "$NVMF_PORT", 00:33:19.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:19.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:19.143 "hdgst": ${hdgst:-false}, 00:33:19.143 "ddgst": ${ddgst:-false} 00:33:19.143 }, 00:33:19.143 "method": "bdev_nvme_attach_controller" 00:33:19.143 } 00:33:19.143 EOF 00:33:19.143 )") 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:19.143 { 00:33:19.143 "params": { 00:33:19.143 "name": "Nvme$subsystem", 00:33:19.143 "trtype": "$TEST_TRANSPORT", 00:33:19.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:19.143 "adrfam": "ipv4", 00:33:19.143 "trsvcid": "$NVMF_PORT", 00:33:19.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:19.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:19.143 "hdgst": ${hdgst:-false}, 00:33:19.143 "ddgst": ${ddgst:-false} 00:33:19.143 }, 00:33:19.143 "method": "bdev_nvme_attach_controller" 00:33:19.143 } 00:33:19.143 EOF 00:33:19.143 )") 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2206416 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
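Four bdevperf secondaries now launch in parallel, one per workload -- write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80 -- each with a distinct shm id (-i 1..4) and each handed an attach-controller config over an anonymous pipe, which is why the trace shows --json /dev/fd/63; the generated JSON itself is printed below. One instance spelled out (the other three differ only in -m, -i and -w):

    build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)   # process substitution delivers the config as /dev/fd/63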
00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:19.143 "params": { 00:33:19.143 "name": "Nvme1", 00:33:19.143 "trtype": "tcp", 00:33:19.143 "traddr": "10.0.0.2", 00:33:19.143 "adrfam": "ipv4", 00:33:19.143 "trsvcid": "4420", 00:33:19.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:19.143 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:19.143 "hdgst": false, 00:33:19.143 "ddgst": false 00:33:19.143 }, 00:33:19.143 "method": "bdev_nvme_attach_controller" 00:33:19.143 }' 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:19.143 "params": { 00:33:19.143 "name": "Nvme1", 00:33:19.143 "trtype": "tcp", 00:33:19.143 "traddr": "10.0.0.2", 00:33:19.143 "adrfam": "ipv4", 00:33:19.143 "trsvcid": "4420", 00:33:19.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:19.143 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:19.143 "hdgst": false, 00:33:19.143 "ddgst": false 00:33:19.143 }, 00:33:19.143 "method": "bdev_nvme_attach_controller" 00:33:19.143 }' 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:19.143 "params": { 00:33:19.143 "name": "Nvme1", 00:33:19.143 "trtype": "tcp", 00:33:19.143 "traddr": "10.0.0.2", 00:33:19.143 "adrfam": "ipv4", 00:33:19.143 "trsvcid": "4420", 00:33:19.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:19.143 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:19.143 "hdgst": false, 00:33:19.143 "ddgst": false 00:33:19.143 }, 00:33:19.143 "method": "bdev_nvme_attach_controller" 00:33:19.143 }' 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:19.143 17:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:19.143 "params": { 00:33:19.143 "name": "Nvme1", 00:33:19.143 "trtype": "tcp", 00:33:19.143 "traddr": "10.0.0.2", 00:33:19.143 "adrfam": "ipv4", 00:33:19.143 "trsvcid": "4420", 00:33:19.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:19.143 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:19.143 "hdgst": false, 00:33:19.143 "ddgst": false 00:33:19.143 }, 00:33:19.143 "method": "bdev_nvme_attach_controller" 00:33:19.143 }' 00:33:19.143 [2024-11-20 17:16:11.102030] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:33:19.143 [2024-11-20 17:16:11.102096] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:19.143 [2024-11-20 17:16:11.106011] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
00:33:19.143 [2024-11-20 17:16:11.106072] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:19.143 [2024-11-20 17:16:11.106736] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:33:19.143 [2024-11-20 17:16:11.106791] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:19.143 [2024-11-20 17:16:11.108523] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:33:19.143 [2024-11-20 17:16:11.108603] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:19.143 [2024-11-20 17:16:11.293481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.405 [2024-11-20 17:16:11.330923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:19.405 [2024-11-20 17:16:11.355191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.405 [2024-11-20 17:16:11.394546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:19.405 [2024-11-20 17:16:11.422096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.405 [2024-11-20 17:16:11.461586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:19.405 [2024-11-20 17:16:11.514926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.405 [2024-11-20 17:16:11.555284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:19.666 Running I/O for 1 seconds... 00:33:19.666 Running I/O for 1 seconds... 00:33:19.666 Running I/O for 1 seconds... 00:33:19.666 Running I/O for 1 seconds... 
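All four one-second runs complete below. When reading the Latency(us) tables, note that the write/read/unmap jobs land in the 9-12k IOPS range, bounded by the TCP round trip between the two E810 ports, while the flush job reports ~182k IOPS: the malloc bdev behind the namespace completes flush as a no-op, so that figure measures little beyond submission overhead. The MiB/s column is derivable from IOPS at the 4 KiB I/O size:

    # 182336 IOPS * 4096 B per I/O / 2^20 = 712.25 MiB/s, matching the flush row
    echo 'scale=2; 182336 * 4096 / 1048576' | bc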
00:33:20.636 12027.00 IOPS, 46.98 MiB/s 00:33:20.636 Latency(us) 00:33:20.636 [2024-11-20T16:16:12.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.636 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:20.636 Nvme1n1 : 1.01 12087.88 47.22 0.00 0.00 10552.89 4915.20 13271.04 00:33:20.636 [2024-11-20T16:16:12.812Z] =================================================================================================================== 00:33:20.636 [2024-11-20T16:16:12.812Z] Total : 12087.88 47.22 0.00 0.00 10552.89 4915.20 13271.04 00:33:20.636 9376.00 IOPS, 36.62 MiB/s 00:33:20.636 Latency(us) 00:33:20.636 [2024-11-20T16:16:12.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.636 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:20.636 Nvme1n1 : 1.01 9427.52 36.83 0.00 0.00 13520.68 5843.63 17148.59 00:33:20.636 [2024-11-20T16:16:12.812Z] =================================================================================================================== 00:33:20.636 [2024-11-20T16:16:12.812Z] Total : 9427.52 36.83 0.00 0.00 13520.68 5843.63 17148.59 00:33:20.897 9637.00 IOPS, 37.64 MiB/s 00:33:20.897 Latency(us) 00:33:20.897 [2024-11-20T16:16:13.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.897 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:20.897 Nvme1n1 : 1.01 9729.76 38.01 0.00 0.00 13112.72 2949.12 22173.01 00:33:20.897 [2024-11-20T16:16:13.073Z] =================================================================================================================== 00:33:20.897 [2024-11-20T16:16:13.073Z] Total : 9729.76 38.01 0.00 0.00 13112.72 2949.12 22173.01 00:33:20.897 182336.00 IOPS, 712.25 MiB/s 00:33:20.897 Latency(us) 00:33:20.897 [2024-11-20T16:16:13.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.897 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:20.897 Nvme1n1 : 1.00 181975.68 710.84 0.00 0.00 699.40 295.25 1966.08 00:33:20.897 [2024-11-20T16:16:13.073Z] =================================================================================================================== 00:33:20.897 [2024-11-20T16:16:13.073Z] Total : 181975.68 710.84 0.00 0.00 699.40 295.25 1966.08 00:33:20.897 17:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2206418 00:33:20.897 17:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2206420 00:33:20.897 17:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2206423 00:33:20.897 17:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:20.897 17:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.897 17:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:20.897 17:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.897 17:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:20.897 17:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:20.897 17:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:20.897 17:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:20.897 17:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:20.897 17:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:20.897 17:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:20.897 17:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:20.897 rmmod nvme_tcp 00:33:20.897 rmmod nvme_fabrics 00:33:20.897 rmmod nvme_keyring 00:33:20.897 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:20.897 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:20.897 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:20.897 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2206352 ']' 00:33:20.897 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2206352 00:33:20.897 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2206352 ']' 00:33:20.897 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2206352 00:33:20.897 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:33:20.897 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:20.897 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2206352 00:33:21.159 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:21.159 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:21.159 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2206352' 00:33:21.159 killing process with pid 2206352 00:33:21.159 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2206352 00:33:21.159 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2206352 00:33:21.159 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:21.159 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:21.159 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:21.159 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:21.159 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
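The MiB/s column in the job tables above is derived, not separately measured: it is IOPS times the 4096-byte I/O size. A one-line shell check against the read job's numbers (values copied from the table; bc being available is the only assumption):

    printf '%.2f MiB/s\n' "$(echo '9427.52 * 4096 / 1048576' | bc -l)"
    # prints 36.83 MiB/s, matching the read job's reported throughput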
00:33:21.159 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:21.159 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:21.159 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:21.159 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:21.159 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.159 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:21.159 17:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.708 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:23.708 00:33:23.708 real 0m13.121s 00:33:23.708 user 0m15.836s 00:33:23.708 sys 0m7.901s 00:33:23.708 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:23.708 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:23.708 ************************************ 00:33:23.708 END TEST nvmf_bdev_io_wait 00:33:23.708 ************************************ 00:33:23.708 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:23.708 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:23.708 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:23.708 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:23.708 ************************************ 00:33:23.708 START TEST nvmf_queue_depth 00:33:23.708 ************************************ 00:33:23.708 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:23.708 * Looking for test storage... 
00:33:23.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:23.708 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:23.708 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:33:23.708 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:23.708 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:23.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.709 --rc genhtml_branch_coverage=1 00:33:23.709 --rc genhtml_function_coverage=1 00:33:23.709 --rc genhtml_legend=1 00:33:23.709 --rc geninfo_all_blocks=1 00:33:23.709 --rc geninfo_unexecuted_blocks=1 00:33:23.709 00:33:23.709 ' 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:23.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.709 --rc genhtml_branch_coverage=1 00:33:23.709 --rc genhtml_function_coverage=1 00:33:23.709 --rc genhtml_legend=1 00:33:23.709 --rc geninfo_all_blocks=1 00:33:23.709 --rc geninfo_unexecuted_blocks=1 00:33:23.709 00:33:23.709 ' 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:23.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.709 --rc genhtml_branch_coverage=1 00:33:23.709 --rc genhtml_function_coverage=1 00:33:23.709 --rc genhtml_legend=1 00:33:23.709 --rc geninfo_all_blocks=1 00:33:23.709 --rc geninfo_unexecuted_blocks=1 00:33:23.709 00:33:23.709 ' 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:23.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.709 --rc genhtml_branch_coverage=1 00:33:23.709 --rc genhtml_function_coverage=1 00:33:23.709 --rc genhtml_legend=1 00:33:23.709 --rc geninfo_all_blocks=1 00:33:23.709 --rc 
geninfo_unexecuted_blocks=1 00:33:23.709 00:33:23.709 ' 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:23.709 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:23.710 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:23.710 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:23.710 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:23.710 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:23.710 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:23.710 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:23.710 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:23.710 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:23.710 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:23.710 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:23.710 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:23.710 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.710 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:23.710 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.710 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:23.710 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:23.710 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:23.710 17:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:31.857 17:16:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:31.857 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:31.857 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:33:31.857 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:31.857 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:31.857 17:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:31.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:31.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:33:31.858 00:33:31.858 --- 10.0.0.2 ping statistics --- 00:33:31.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.858 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:31.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:31.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:33:31.858 00:33:31.858 --- 10.0.0.1 ping statistics --- 00:33:31.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.858 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2211089 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2211089 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2211089 ']' 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
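Everything from here on runs over the namespace plumbing traced above: one port of the e810 NIC (cvl_0_0) is moved into a private network namespace to host the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator. Boiled down to the essential commands, with interface names and addresses exactly as in this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2   # the connectivity check whose output appears above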
00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:31.858 17:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:31.858 [2024-11-20 17:16:23.236401] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:31.858 [2024-11-20 17:16:23.237497] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:33:31.858 [2024-11-20 17:16:23.237544] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.858 [2024-11-20 17:16:23.340179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.858 [2024-11-20 17:16:23.390721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:31.858 [2024-11-20 17:16:23.390767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:31.858 [2024-11-20 17:16:23.390776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:31.858 [2024-11-20 17:16:23.390783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:31.858 [2024-11-20 17:16:23.390789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:31.858 [2024-11-20 17:16:23.391535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.858 [2024-11-20 17:16:23.469111] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:31.858 [2024-11-20 17:16:23.469415] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
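The two spdk_thread notices just above are the visible effect of the --interrupt-mode flag this whole suite runs under: the app thread and the poll group wait on events instead of busy-polling reactors. For reference, the target invocation that produced them, as traced a few entries earlier, reduces to:

    # single data core (-m 0x2, i.e. core 1), shm id 0, all tracepoint groups enabled
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2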
00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:32.119 [2024-11-20 17:16:24.108389] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:32.119 Malloc0 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:32.119 [2024-11-20 17:16:24.192609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2211194 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2211194 /var/tmp/bdevperf.sock 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2211194 ']' 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:32.119 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:32.120 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:32.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:32.120 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:32.120 17:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:32.120 [2024-11-20 17:16:24.252462] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
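With the listener up and bdevperf starting below, the whole target-side setup amounts to four RPCs plus the listener. Issued directly with scripts/rpc.py against the target's socket they would read as follows; this is a sketch of the same calls the rpc_cmd traces above already show, not an extra step the test performs:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, options as traced above
    $RPC bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB RAM disk, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # expose the malloc bdev as a namespace
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420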
00:33:32.120 [2024-11-20 17:16:24.252565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2211194 ]
00:33:32.381 [2024-11-20 17:16:24.347828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:32.381 [2024-11-20 17:16:24.401068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:32.954 17:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:32.954 17:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:33:32.954 17:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:32.954 17:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:32.954 17:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:33.215 NVMe0n1
00:33:33.215 17:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:33.215 17:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:33:33.475 Running I/O for 10 seconds...
00:33:35.362 8199.00 IOPS, 32.03 MiB/s [2024-11-20T16:16:28.481Z]
8663.00 IOPS, 33.84 MiB/s [2024-11-20T16:16:29.863Z]
9219.00 IOPS, 36.01 MiB/s [2024-11-20T16:16:30.432Z]
10243.50 IOPS, 40.01 MiB/s [2024-11-20T16:16:31.815Z]
10852.60 IOPS, 42.39 MiB/s [2024-11-20T16:16:32.755Z]
11273.67 IOPS, 44.04 MiB/s [2024-11-20T16:16:33.697Z]
11591.43 IOPS, 45.28 MiB/s [2024-11-20T16:16:34.638Z]
11831.12 IOPS, 46.22 MiB/s [2024-11-20T16:16:35.580Z]
12033.56 IOPS, 47.01 MiB/s [2024-11-20T16:16:35.580Z]
12177.90 IOPS, 47.57 MiB/s
00:33:43.404 Latency(us)
00:33:43.404 [2024-11-20T16:16:35.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:43.404 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:33:43.404 Verification LBA range: start 0x0 length 0x4000
00:33:43.404 NVMe0n1 : 10.06 12201.62 47.66 0.00 0.00 83626.04 25668.27 76458.67
00:33:43.404 [2024-11-20T16:16:35.580Z] ===================================================================================================================
00:33:43.404 [2024-11-20T16:16:35.580Z] Total : 12201.62 47.66 0.00 0.00 83626.04 25668.27 76458.67
00:33:43.404 {
00:33:43.404   "results": [
00:33:43.404     {
00:33:43.404       "job": "NVMe0n1",
00:33:43.404       "core_mask": "0x1",
00:33:43.404       "workload": "verify",
00:33:43.404       "status": "finished",
00:33:43.404       "verify_range": {
00:33:43.404         "start": 0,
00:33:43.404         "length": 16384
00:33:43.404       },
00:33:43.404       "queue_depth": 1024,
00:33:43.404       "io_size": 4096,
00:33:43.404       "runtime": 10.060629,
00:33:43.404       "iops": 12201.622781239623,
00:33:43.404       "mibps": 47.66258898921728,
00:33:43.404       "io_failed": 0,
00:33:43.404       "io_timeout": 0,
00:33:43.404       "avg_latency_us": 83626.03847719597,
00:33:43.404       "min_latency_us": 25668.266666666666,
00:33:43.404       "max_latency_us": 76458.66666666667
00:33:43.404     }
00:33:43.404 ], 00:33:43.404 "core_count": 1 00:33:43.404 } 00:33:43.404 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2211194 00:33:43.404 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2211194 ']' 00:33:43.404 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2211194 00:33:43.404 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:43.404 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:43.404 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2211194 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2211194' 00:33:43.665 killing process with pid 2211194 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2211194 00:33:43.665 Received shutdown signal, test time was about 10.000000 seconds 00:33:43.665 00:33:43.665 Latency(us) 00:33:43.665 [2024-11-20T16:16:35.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:43.665 [2024-11-20T16:16:35.841Z] =================================================================================================================== 00:33:43.665 [2024-11-20T16:16:35.841Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2211194 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:43.665 rmmod nvme_tcp 00:33:43.665 rmmod nvme_fabrics 00:33:43.665 rmmod nvme_keyring 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
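The nvmftestfini teardown running here (and finishing just below) is the mirror image of the setup: unload the host-side NVMe modules, strip the test's firewall rule, drop the namespace, and flush the initiator address. In outline:

    modprobe -v -r nvme-tcp                               # produces the rmmod lines above
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # remove the tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                       # what _remove_spdk_ns amounts to (assumed helper behavior)
    ip -4 addr flush cvl_0_1                              # clears the initiator address, as traced below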
00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2211089 ']' 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2211089 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2211089 ']' 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2211089 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2211089 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2211089' 00:33:43.665 killing process with pid 2211089 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2211089 00:33:43.665 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2211089 00:33:43.927 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:43.927 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:43.927 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:43.927 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:43.927 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:43.927 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:43.927 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:33:43.927 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:43.927 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:43.927 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.927 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:43.927 17:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.473 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:46.474 00:33:46.474 real 0m22.609s 00:33:46.474 user 0m24.896s 00:33:46.474 sys 0m7.489s 00:33:46.474 17:16:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:46.474 ************************************ 00:33:46.474 END TEST nvmf_queue_depth 00:33:46.474 ************************************ 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:46.474 ************************************ 00:33:46.474 START TEST nvmf_target_multipath 00:33:46.474 ************************************ 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:46.474 * Looking for test storage... 00:33:46.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:46.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.474 --rc genhtml_branch_coverage=1 00:33:46.474 --rc genhtml_function_coverage=1 00:33:46.474 --rc genhtml_legend=1 00:33:46.474 --rc geninfo_all_blocks=1 00:33:46.474 --rc geninfo_unexecuted_blocks=1 00:33:46.474 00:33:46.474 ' 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:46.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.474 --rc genhtml_branch_coverage=1 00:33:46.474 --rc genhtml_function_coverage=1 00:33:46.474 --rc genhtml_legend=1 00:33:46.474 --rc geninfo_all_blocks=1 00:33:46.474 --rc geninfo_unexecuted_blocks=1 00:33:46.474 00:33:46.474 ' 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:46.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.474 --rc genhtml_branch_coverage=1 00:33:46.474 --rc genhtml_function_coverage=1 00:33:46.474 --rc genhtml_legend=1 
00:33:46.474 --rc geninfo_all_blocks=1 00:33:46.474 --rc geninfo_unexecuted_blocks=1 00:33:46.474 00:33:46.474 ' 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:46.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.474 --rc genhtml_branch_coverage=1 00:33:46.474 --rc genhtml_function_coverage=1 00:33:46.474 --rc genhtml_legend=1 00:33:46.474 --rc geninfo_all_blocks=1 00:33:46.474 --rc geninfo_unexecuted_blocks=1 00:33:46.474 00:33:46.474 ' 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:46.474 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:46.475 17:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:54.765 17:16:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:54.765 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:54.766 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:54.766 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:54.766 17:16:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:54.766 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:54.766 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:54.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:54.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:33:54.766 00:33:54.766 --- 10.0.0.2 ping statistics --- 00:33:54.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.766 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:54.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:54.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:33:54.766 00:33:54.766 --- 10.0.0.1 ping statistics --- 00:33:54.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.766 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:54.766 only one NIC for nvmf test 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:54.766 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:54.766 rmmod nvme_tcp 00:33:54.766 rmmod nvme_fabrics 00:33:54.767 rmmod nvme_keyring 00:33:54.767 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:54.767 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:54.767 17:16:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:54.767 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:54.767 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:54.767 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:54.767 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:54.767 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:54.767 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:54.767 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:54.767 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:54.767 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:54.767 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:54.767 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:54.767 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:54.767 17:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.153 17:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:56.153 17:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:56.153 17:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:56.153 17:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:56.153 17:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:56.153 17:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:56.153 17:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:56.153 17:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:56.153 17:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:56.153 17:16:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:56.153 00:33:56.153 real 0m9.939s 00:33:56.153 user 0m2.134s 00:33:56.153 sys 0m5.759s 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:56.153 ************************************ 00:33:56.153 END TEST nvmf_target_multipath 00:33:56.153 ************************************ 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:56.153 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:56.153 ************************************ 00:33:56.154 START TEST nvmf_zcopy 00:33:56.154 ************************************ 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:56.154 * Looking for test storage... 
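Before the multipath test bailed out ("only one NIC for nvmf test"), its nvmftestinit still ran nvmf_tcp_init, whose steps are scattered through the trace above. Condensed into one replay (interface names cvl_0_0/cvl_0_1 are the E810 ports discovered on this host and will differ elsewhere; all commands need root):

    # one port of the two-port NIC moves into a network namespace so the
    # SPDK target and the kernel initiator talk over a real link
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator IP, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port, tagging the rule so teardown can find it
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                               # reachability check

The tag is what makes teardown cheap: the iptr helper seen at nvmf/common.sh@791 above strips every rule it ever added in one pass with iptables-save | grep -v SPDK_NVMF | iptables-restore.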
00:33:56.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:56.154 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:56.415 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:56.415 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:56.415 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:56.415 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:56.415 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:56.415 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:56.415 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:56.415 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:56.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.415 --rc genhtml_branch_coverage=1 00:33:56.415 --rc genhtml_function_coverage=1 00:33:56.415 --rc genhtml_legend=1 00:33:56.415 --rc geninfo_all_blocks=1 00:33:56.415 --rc geninfo_unexecuted_blocks=1 00:33:56.415 00:33:56.415 ' 00:33:56.415 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:56.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.415 --rc genhtml_branch_coverage=1 00:33:56.415 --rc genhtml_function_coverage=1 00:33:56.415 --rc genhtml_legend=1 00:33:56.415 --rc geninfo_all_blocks=1 00:33:56.415 --rc geninfo_unexecuted_blocks=1 00:33:56.415 00:33:56.415 ' 00:33:56.415 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:56.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.415 --rc genhtml_branch_coverage=1 00:33:56.415 --rc genhtml_function_coverage=1 00:33:56.415 --rc genhtml_legend=1 00:33:56.415 --rc geninfo_all_blocks=1 00:33:56.415 --rc geninfo_unexecuted_blocks=1 00:33:56.415 00:33:56.415 ' 00:33:56.415 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:56.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.415 --rc genhtml_branch_coverage=1 00:33:56.415 --rc genhtml_function_coverage=1 00:33:56.415 --rc genhtml_legend=1 00:33:56.415 --rc geninfo_all_blocks=1 00:33:56.415 --rc geninfo_unexecuted_blocks=1 00:33:56.415 00:33:56.415 ' 00:33:56.415 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:56.415 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:56.416 17:16:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:56.416 17:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:04.556 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:34:04.557 17:16:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:04.557 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:04.557 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:04.557 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:04.557 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:04.557 17:16:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:04.557 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:04.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:04.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms
00:34:04.557
00:34:04.557 --- 10.0.0.2 ping statistics ---
00:34:04.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:04.558 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:04.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:04.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms
00:34:04.558
00:34:04.558 --- 10.0.0.1 ping statistics ---
00:34:04.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:04.558 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2221800
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2221800
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2221800 ']'
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:04.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:04.558 17:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:04.558 [2024-11-20 17:16:55.921733] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:34:04.558 [2024-11-20 17:16:55.922866] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization...
00:34:04.558 [2024-11-20 17:16:55.922916] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:04.558 [2024-11-20 17:16:56.023110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:04.558 [2024-11-20 17:16:56.073741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:04.558 [2024-11-20 17:16:56.073793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:04.558 [2024-11-20 17:16:56.073801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:04.558 [2024-11-20 17:16:56.073809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:04.558 [2024-11-20 17:16:56.073815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:04.558 [2024-11-20 17:16:56.074605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:04.558 [2024-11-20 17:16:56.152643] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:34:04.558 [2024-11-20 17:16:56.152951] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
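At this point the bench is fully plumbed: cvl_0_0 sits inside the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), and the nvmf_tgt reactor is parked in interrupt mode. A minimal sketch of the equivalent manual bring-up, using only commands that appear in the trace above (root privileges and the test's error handling omitted; the ipts wrapper simply adds the SPDK_NVMF comment seen in the iptables line):

    # move one port of the NIC pair into a private namespace for the target
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port

    # start the target in the namespace: shm id 0, all tracepoint groups,
    # core mask 0x2, interrupt mode instead of polling
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &

waitforlisten then polls until the new process answers on /var/tmp/spdk.sock, which is what the 'Waiting for process to start up...' line above reflects.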
00:34:04.819 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:04.819 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:34:04.819 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:04.819 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:04.819 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:04.819 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:04.819 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:34:04.819 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:34:04.819 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:04.819 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:04.820 [2024-11-20 17:16:56.803505] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:04.820 [2024-11-20 17:16:56.831818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:04.820 malloc0
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:34:04.820 {
00:34:04.820 "params": {
00:34:04.820 "name": "Nvme$subsystem",
00:34:04.820 "trtype": "$TEST_TRANSPORT",
00:34:04.820 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:04.820 "adrfam": "ipv4",
00:34:04.820 "trsvcid": "$NVMF_PORT",
00:34:04.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:04.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:04.820 "hdgst": ${hdgst:-false},
00:34:04.820 "ddgst": ${ddgst:-false}
00:34:04.820 },
00:34:04.820 "method": "bdev_nvme_attach_controller"
00:34:04.820 }
00:34:04.820 EOF
00:34:04.820 )")
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:34:04.820 17:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:34:04.820 "params": {
00:34:04.820 "name": "Nvme1",
00:34:04.820 "trtype": "tcp",
00:34:04.820 "traddr": "10.0.0.2",
00:34:04.820 "adrfam": "ipv4",
00:34:04.820 "trsvcid": "4420",
00:34:04.820 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:34:04.820 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:34:04.820 "hdgst": false,
00:34:04.820 "ddgst": false
00:34:04.820 },
00:34:04.820 "method": "bdev_nvme_attach_controller"
00:34:04.820 }'
00:34:04.820 [2024-11-20 17:16:56.935210] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization...
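The rpc_cmd sequence traced above (zcopy.sh lines 22 through 30) is the test suite's thin wrapper around SPDK's scripts/rpc.py client talking to /var/tmp/spdk.sock. Issued by hand, the same target configuration would look roughly like this sketch, with every method name and flag taken verbatim from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                      # TCP transport, zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420  # discovery subsystem listener
    $rpc bdev_malloc_create 32 4096 -b malloc0                             # 32 MB malloc bdev, 4096-byte blocks; prints "malloc0"
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1     # expose malloc0 as NSID 1

The lone "malloc0" line in the trace is the RPC's stdout: bdev_malloc_create returns the name of the bdev it created, which the script then feeds into nvmf_subsystem_add_ns.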
00:34:04.820 [2024-11-20 17:16:56.935281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221836 ]
00:34:05.082 [2024-11-20 17:16:57.027516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:05.082 [2024-11-20 17:16:57.080343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:05.344 Running I/O for 10 seconds...
00:34:07.228 6396.00 IOPS, 49.97 MiB/s [2024-11-20T16:17:00.789Z]
6458.50 IOPS, 50.46 MiB/s [2024-11-20T16:17:01.732Z]
6479.33 IOPS, 50.62 MiB/s [2024-11-20T16:17:02.673Z]
6490.00 IOPS, 50.70 MiB/s [2024-11-20T16:17:03.613Z]
6727.00 IOPS, 52.55 MiB/s [2024-11-20T16:17:04.552Z]
7216.50 IOPS, 56.38 MiB/s [2024-11-20T16:17:05.492Z]
7574.86 IOPS, 59.18 MiB/s [2024-11-20T16:17:06.433Z]
7834.88 IOPS, 61.21 MiB/s [2024-11-20T16:17:07.831Z]
8040.11 IOPS, 62.81 MiB/s [2024-11-20T16:17:07.831Z]
8204.90 IOPS, 64.10 MiB/s
00:34:15.655 Latency(us)
00:34:15.655 [2024-11-20T16:17:07.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:15.655 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:34:15.655 Verification LBA range: start 0x0 length 0x1000
00:34:15.655 Nvme1n1 : 10.01 8208.54 64.13 0.00 0.00 15548.68 907.95 27962.03
00:34:15.655 [2024-11-20T16:17:07.831Z] ===================================================================================================================
00:34:15.655 [2024-11-20T16:17:07.831Z] Total : 8208.54 64.13 0.00 0.00 15548.68 907.95 27962.03
00:34:15.655 17:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2223830
00:34:15.655 17:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:34:15.655 17:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:15.655 17:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:34:15.655 17:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:34:15.655 17:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:34:15.655 17:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:34:15.655 17:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:34:15.655 17:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:34:15.655 {
00:34:15.655 "params": {
00:34:15.655 "name": "Nvme$subsystem",
00:34:15.655 "trtype": "$TEST_TRANSPORT",
00:34:15.655 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:15.655 "adrfam": "ipv4",
00:34:15.655 "trsvcid": "$NVMF_PORT",
00:34:15.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:15.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:15.655 "hdgst": ${hdgst:-false},
00:34:15.655 "ddgst": ${ddgst:-false}
00:34:15.655 },
00:34:15.655 "method": "bdev_nvme_attach_controller"
00:34:15.655 }
00:34:15.655 EOF
00:34:15.655 )")
00:34:15.655 [2024-11-20 17:17:07.507021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.655 [2024-11-20 17:17:07.507052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.655 17:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:34:15.655 17:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:34:15.655 [2024-11-20 17:17:07.514985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.655 [2024-11-20 17:17:07.514993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.655 17:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:34:15.655 17:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:34:15.655 "params": {
00:34:15.655 "name": "Nvme1",
00:34:15.655 "trtype": "tcp",
00:34:15.655 "traddr": "10.0.0.2",
00:34:15.655 "adrfam": "ipv4",
00:34:15.655 "trsvcid": "4420",
00:34:15.655 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:34:15.655 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:34:15.655 "hdgst": false,
00:34:15.655 "ddgst": false
00:34:15.655 },
00:34:15.655 "method": "bdev_nvme_attach_controller"
00:34:15.655 }'
00:34:15.655 [ ... the "Requested NSID 1 already in use" / "Unable to add namespace" error pair recurs, 2024-11-20 17:17:07.522983 onward, while the second bdevperf instance starts ... ]
00:34:15.655 [2024-11-20 17:17:07.552245] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization...
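For reference, gen_nvmf_target_json hands bdevperf the fragment printed above through a /dev/fd process substitution. A standalone sketch of the same run using a file on disk instead: the attach parameters are exactly the ones printed in the log, while the outer "subsystems"/"bdev" wrapper is the standard SPDK JSON-config shape that gen_nvmf_target_json builds around them (the wrapper is assumed here, it is not shown verbatim in this trace):

    # nvme.json - assumed wrapper around the printed bdev_nvme_attach_controller params
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

    # 5 s run, random read/write at a 50% mix, queue depth 128, 8192-byte I/O - flags as logged above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192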
00:34:15.656 [2024-11-20 17:17:07.552302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223830 ]
00:34:15.656 [2024-11-20 17:17:07.558983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.656 [2024-11-20 17:17:07.558991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.656 [ ... the same subsystem.c:2126 / nvmf_rpc.c:1520 error pair repeats, 2024-11-20 17:17:07.570981 onward ... ]
00:34:15.656 [2024-11-20 17:17:07.635015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:15.656 [ ... error pair repeats, 17:17:07.642989 onward ... ]
00:34:15.656 [2024-11-20 17:17:07.664060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:15.656 [ ... error pair repeats, 17:17:07.666982 onward ... ]
00:34:15.917 Running I/O for 5 seconds...
00:34:15.917 [ ... error pair repeats, 17:17:07.972423 onward, as the runtime advances from 00:34:15.917 to 00:34:16.959 ... ]
00:34:16.959 18993.00 IOPS, 148.38 MiB/s [2024-11-20T16:17:09.135Z]
00:34:17.487 [ ... error pair repeats, 17:17:08.969520 onward ... ]
00:34:17.487 [2024-11-20 17:17:09.463222]
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.487 [2024-11-20 17:17:09.463237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.487 [2024-11-20 17:17:09.476117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.487 [2024-11-20 17:17:09.476132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.487 [2024-11-20 17:17:09.489873] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.487 [2024-11-20 17:17:09.489888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.487 [2024-11-20 17:17:09.502725] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.487 [2024-11-20 17:17:09.502740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.487 [2024-11-20 17:17:09.516211] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.487 [2024-11-20 17:17:09.516226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.487 [2024-11-20 17:17:09.530235] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.487 [2024-11-20 17:17:09.530250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.487 [2024-11-20 17:17:09.543280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.487 [2024-11-20 17:17:09.543294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.487 [2024-11-20 17:17:09.557996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.487 [2024-11-20 17:17:09.558011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.487 [2024-11-20 17:17:09.570844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.487 [2024-11-20 17:17:09.570859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.487 [2024-11-20 17:17:09.583643] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.487 [2024-11-20 17:17:09.583657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.487 [2024-11-20 17:17:09.598390] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.487 [2024-11-20 17:17:09.598406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.487 [2024-11-20 17:17:09.611050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.487 [2024-11-20 17:17:09.611065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.487 [2024-11-20 17:17:09.623558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.487 [2024-11-20 17:17:09.623572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.487 [2024-11-20 17:17:09.638122] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.487 [2024-11-20 17:17:09.638137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.487 [2024-11-20 17:17:09.651189] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.487 [2024-11-20 17:17:09.651204] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.748 [2024-11-20 17:17:09.662997] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.748 [2024-11-20 17:17:09.663012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.748 [2024-11-20 17:17:09.676081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.748 [2024-11-20 17:17:09.676096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.748 [2024-11-20 17:17:09.689680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.748 [2024-11-20 17:17:09.689695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.748 [2024-11-20 17:17:09.702725] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.748 [2024-11-20 17:17:09.702745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.748 [2024-11-20 17:17:09.715450] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.748 [2024-11-20 17:17:09.715465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.748 [2024-11-20 17:17:09.730319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.748 [2024-11-20 17:17:09.730334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.748 [2024-11-20 17:17:09.743748] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.748 [2024-11-20 17:17:09.743762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.748 [2024-11-20 17:17:09.757907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.748 [2024-11-20 17:17:09.757923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.748 [2024-11-20 17:17:09.770897] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.748 [2024-11-20 17:17:09.770912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.748 [2024-11-20 17:17:09.783635] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.748 [2024-11-20 17:17:09.783650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.748 [2024-11-20 17:17:09.798105] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.748 [2024-11-20 17:17:09.798119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.748 [2024-11-20 17:17:09.811278] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.748 [2024-11-20 17:17:09.811292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.748 [2024-11-20 17:17:09.826203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.748 [2024-11-20 17:17:09.826218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.748 [2024-11-20 17:17:09.839477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.748 [2024-11-20 17:17:09.839492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.748 [2024-11-20 17:17:09.853684] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.748 [2024-11-20 17:17:09.853699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.748 [2024-11-20 17:17:09.866753] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.748 [2024-11-20 17:17:09.866767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.748 [2024-11-20 17:17:09.879949] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.748 [2024-11-20 17:17:09.879963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.748 [2024-11-20 17:17:09.894131] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.748 [2024-11-20 17:17:09.894145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.748 [2024-11-20 17:17:09.907525] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.748 [2024-11-20 17:17:09.907539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 [2024-11-20 17:17:09.922085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.009 [2024-11-20 17:17:09.922101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 [2024-11-20 17:17:09.935306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.009 [2024-11-20 17:17:09.935320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 [2024-11-20 17:17:09.950398] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.009 [2024-11-20 17:17:09.950413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 [2024-11-20 17:17:09.963618] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.009 [2024-11-20 17:17:09.963637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 19027.50 IOPS, 148.65 MiB/s [2024-11-20T16:17:10.185Z] [2024-11-20 17:17:09.978026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.009 [2024-11-20 17:17:09.978042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 [2024-11-20 17:17:09.991121] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.009 [2024-11-20 17:17:09.991136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 [2024-11-20 17:17:10.004928] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.009 [2024-11-20 17:17:10.004945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 [2024-11-20 17:17:10.018294] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.009 [2024-11-20 17:17:10.018309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 [2024-11-20 17:17:10.031578] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.009 [2024-11-20 17:17:10.031593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 [2024-11-20 17:17:10.046166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:34:18.009 [2024-11-20 17:17:10.046182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 [2024-11-20 17:17:10.059178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.009 [2024-11-20 17:17:10.059192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 [2024-11-20 17:17:10.072021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.009 [2024-11-20 17:17:10.072036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 [2024-11-20 17:17:10.086066] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.009 [2024-11-20 17:17:10.086081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 [2024-11-20 17:17:10.099270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.009 [2024-11-20 17:17:10.099284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 [2024-11-20 17:17:10.114063] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.009 [2024-11-20 17:17:10.114077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 [2024-11-20 17:17:10.127470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.009 [2024-11-20 17:17:10.127484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 [2024-11-20 17:17:10.142050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.009 [2024-11-20 17:17:10.142064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 [2024-11-20 17:17:10.155034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.009 [2024-11-20 17:17:10.155048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 [2024-11-20 17:17:10.167776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.009 [2024-11-20 17:17:10.167791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.009 [2024-11-20 17:17:10.182596] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.009 [2024-11-20 17:17:10.182611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.270 [2024-11-20 17:17:10.195840] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.270 [2024-11-20 17:17:10.195855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.270 [2024-11-20 17:17:10.210089] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.270 [2024-11-20 17:17:10.210103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.270 [2024-11-20 17:17:10.223384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.270 [2024-11-20 17:17:10.223398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.270 [2024-11-20 17:17:10.238434] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.270 [2024-11-20 17:17:10.238448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.271 [2024-11-20 17:17:10.251148] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.271 [2024-11-20 17:17:10.251166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.271 [2024-11-20 17:17:10.264463] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.271 [2024-11-20 17:17:10.264477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.271 [2024-11-20 17:17:10.278675] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.271 [2024-11-20 17:17:10.278689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.271 [2024-11-20 17:17:10.291204] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.271 [2024-11-20 17:17:10.291218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.271 [2024-11-20 17:17:10.304357] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.271 [2024-11-20 17:17:10.304371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.271 [2024-11-20 17:17:10.318127] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.271 [2024-11-20 17:17:10.318142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.271 [2024-11-20 17:17:10.331134] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.271 [2024-11-20 17:17:10.331149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.271 [2024-11-20 17:17:10.344032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.271 [2024-11-20 17:17:10.344046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.271 [2024-11-20 17:17:10.358438] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.271 [2024-11-20 17:17:10.358451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.271 [2024-11-20 17:17:10.371453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.271 [2024-11-20 17:17:10.371467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.271 [2024-11-20 17:17:10.386712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.271 [2024-11-20 17:17:10.386726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.271 [2024-11-20 17:17:10.399767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.271 [2024-11-20 17:17:10.399781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.271 [2024-11-20 17:17:10.414205] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.271 [2024-11-20 17:17:10.414219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.271 [2024-11-20 17:17:10.427269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.271 [2024-11-20 17:17:10.427282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.271 [2024-11-20 17:17:10.442737] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.271 [2024-11-20 17:17:10.442752] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.532 [2024-11-20 17:17:10.455968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.532 [2024-11-20 17:17:10.455982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.532 [2024-11-20 17:17:10.470451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.532 [2024-11-20 17:17:10.470465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.532 [2024-11-20 17:17:10.483694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.532 [2024-11-20 17:17:10.483709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.532 [2024-11-20 17:17:10.498140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.532 [2024-11-20 17:17:10.498155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.532 [2024-11-20 17:17:10.511220] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.532 [2024-11-20 17:17:10.511234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.532 [2024-11-20 17:17:10.524144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.532 [2024-11-20 17:17:10.524165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.532 [2024-11-20 17:17:10.538382] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.532 [2024-11-20 17:17:10.538397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.532 [2024-11-20 17:17:10.551481] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.532 [2024-11-20 17:17:10.551494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.532 [2024-11-20 17:17:10.565918] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.532 [2024-11-20 17:17:10.565931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.532 [2024-11-20 17:17:10.579002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.532 [2024-11-20 17:17:10.579016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.532 [2024-11-20 17:17:10.591809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.532 [2024-11-20 17:17:10.591823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.532 [2024-11-20 17:17:10.606373] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.532 [2024-11-20 17:17:10.606387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.532 [2024-11-20 17:17:10.619318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.532 [2024-11-20 17:17:10.619332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.532 [2024-11-20 17:17:10.634225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.532 [2024-11-20 17:17:10.634240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.532 [2024-11-20 17:17:10.646969] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.532 [2024-11-20 17:17:10.646983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.532 [2024-11-20 17:17:10.659837] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.532 [2024-11-20 17:17:10.659851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.532 [2024-11-20 17:17:10.673921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.532 [2024-11-20 17:17:10.673935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.532 [2024-11-20 17:17:10.687310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.532 [2024-11-20 17:17:10.687324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.532 [2024-11-20 17:17:10.702370] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.532 [2024-11-20 17:17:10.702384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.794 [2024-11-20 17:17:10.715296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.794 [2024-11-20 17:17:10.715310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.794 [2024-11-20 17:17:10.729918] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.794 [2024-11-20 17:17:10.729933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.794 [2024-11-20 17:17:10.743436] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.794 [2024-11-20 17:17:10.743449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.794 [2024-11-20 17:17:10.758242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.794 [2024-11-20 17:17:10.758256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.794 [2024-11-20 17:17:10.771795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.794 [2024-11-20 17:17:10.771809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.794 [2024-11-20 17:17:10.786172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.794 [2024-11-20 17:17:10.786186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.794 [2024-11-20 17:17:10.799207] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.794 [2024-11-20 17:17:10.799222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.794 [2024-11-20 17:17:10.811599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.794 [2024-11-20 17:17:10.811613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.794 [2024-11-20 17:17:10.826162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.794 [2024-11-20 17:17:10.826177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.794 [2024-11-20 17:17:10.839139] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.794 [2024-11-20 17:17:10.839153] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.794 [2024-11-20 17:17:10.851884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.794 [2024-11-20 17:17:10.851898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.794 [2024-11-20 17:17:10.866469] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.794 [2024-11-20 17:17:10.866484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.794 [2024-11-20 17:17:10.879497] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.794 [2024-11-20 17:17:10.879511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.794 [2024-11-20 17:17:10.894230] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.794 [2024-11-20 17:17:10.894244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.794 [2024-11-20 17:17:10.907557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.794 [2024-11-20 17:17:10.907571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.794 [2024-11-20 17:17:10.922286] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.794 [2024-11-20 17:17:10.922301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.794 [2024-11-20 17:17:10.935297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.794 [2024-11-20 17:17:10.935311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.794 [2024-11-20 17:17:10.950024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.794 [2024-11-20 17:17:10.950038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.794 [2024-11-20 17:17:10.962981] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.794 [2024-11-20 17:17:10.962995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.055 19023.00 IOPS, 148.62 MiB/s [2024-11-20T16:17:11.231Z] [2024-11-20 17:17:10.975997] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.055 [2024-11-20 17:17:10.976011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.056 [2024-11-20 17:17:10.989805] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.056 [2024-11-20 17:17:10.989824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.056 [2024-11-20 17:17:11.002800] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.056 [2024-11-20 17:17:11.002814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.056 [2024-11-20 17:17:11.015683] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.056 [2024-11-20 17:17:11.015697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.056 [2024-11-20 17:17:11.030799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.056 [2024-11-20 17:17:11.030814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.056 [2024-11-20 
17:17:11.043538] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.056 [2024-11-20 17:17:11.043552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.056 [2024-11-20 17:17:11.058162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.056 [2024-11-20 17:17:11.058177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.056 [2024-11-20 17:17:11.071043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.056 [2024-11-20 17:17:11.071057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.056 [2024-11-20 17:17:11.083731] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.056 [2024-11-20 17:17:11.083744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.056 [2024-11-20 17:17:11.098097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.056 [2024-11-20 17:17:11.098111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.056 [2024-11-20 17:17:11.111226] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.056 [2024-11-20 17:17:11.111240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.056 [2024-11-20 17:17:11.123929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.056 [2024-11-20 17:17:11.123943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.056 [2024-11-20 17:17:11.138034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.056 [2024-11-20 17:17:11.138048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.056 [2024-11-20 17:17:11.150950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.056 [2024-11-20 17:17:11.150965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.056 [2024-11-20 17:17:11.163798] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.056 [2024-11-20 17:17:11.163812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.056 [2024-11-20 17:17:11.178110] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.056 [2024-11-20 17:17:11.178124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.056 [2024-11-20 17:17:11.190742] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.056 [2024-11-20 17:17:11.190757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.056 [2024-11-20 17:17:11.203478] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.056 [2024-11-20 17:17:11.203492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.056 [2024-11-20 17:17:11.217957] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.056 [2024-11-20 17:17:11.217972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.317 [2024-11-20 17:17:11.230943] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.317 [2024-11-20 17:17:11.230959] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.317 [2024-11-20 17:17:11.244149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.317 [2024-11-20 17:17:11.244173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.317 [2024-11-20 17:17:11.258421] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.317 [2024-11-20 17:17:11.258436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.317 [2024-11-20 17:17:11.271504] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.317 [2024-11-20 17:17:11.271519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.317 [2024-11-20 17:17:11.286565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.317 [2024-11-20 17:17:11.286580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.317 [2024-11-20 17:17:11.299512] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.317 [2024-11-20 17:17:11.299526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.317 [2024-11-20 17:17:11.314140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.317 [2024-11-20 17:17:11.314155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.317 [2024-11-20 17:17:11.326963] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.317 [2024-11-20 17:17:11.326977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.317 [2024-11-20 17:17:11.339768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.317 [2024-11-20 17:17:11.339783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.317 [2024-11-20 17:17:11.354145] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.317 [2024-11-20 17:17:11.354164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.318 [2024-11-20 17:17:11.367417] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.318 [2024-11-20 17:17:11.367431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.318 [2024-11-20 17:17:11.382185] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.318 [2024-11-20 17:17:11.382200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.318 [2024-11-20 17:17:11.395549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.318 [2024-11-20 17:17:11.395563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.318 [2024-11-20 17:17:11.410229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.318 [2024-11-20 17:17:11.410244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.318 [2024-11-20 17:17:11.423438] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.318 [2024-11-20 17:17:11.423452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.318 [2024-11-20 17:17:11.438534] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.318 [2024-11-20 17:17:11.438549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.318 [2024-11-20 17:17:11.451645] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.318 [2024-11-20 17:17:11.451659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.318 [2024-11-20 17:17:11.466649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.318 [2024-11-20 17:17:11.466663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.318 [2024-11-20 17:17:11.479811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.318 [2024-11-20 17:17:11.479825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.494707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.494722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.507850] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.507868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.521981] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.521996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.535188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.535203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.547965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.547979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.562394] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.562409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.575263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.575277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.590129] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.590144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.603387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.603402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.618110] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.618125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.631047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.631062] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.643928] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.643942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.658263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.658278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.671341] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.671356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.686087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.686102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.699217] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.699231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.712024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.712038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.726156] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.726176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.739113] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.739128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.579 [2024-11-20 17:17:11.751818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.579 [2024-11-20 17:17:11.751832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.841 [2024-11-20 17:17:11.766022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.841 [2024-11-20 17:17:11.766037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.841 [2024-11-20 17:17:11.778939] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.841 [2024-11-20 17:17:11.778953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.841 [2024-11-20 17:17:11.792340] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.841 [2024-11-20 17:17:11.792355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.841 [2024-11-20 17:17:11.806383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.841 [2024-11-20 17:17:11.806398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.841 [2024-11-20 17:17:11.819224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.841 [2024-11-20 17:17:11.819239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.841 [2024-11-20 17:17:11.832109] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.841 [2024-11-20 17:17:11.832124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.841 [2024-11-20 17:17:11.846171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.841 [2024-11-20 17:17:11.846186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.841 [2024-11-20 17:17:11.859029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.841 [2024-11-20 17:17:11.859044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.841 [2024-11-20 17:17:11.871559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.841 [2024-11-20 17:17:11.871573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.841 [2024-11-20 17:17:11.886420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.841 [2024-11-20 17:17:11.886434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.841 [2024-11-20 17:17:11.899352] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.841 [2024-11-20 17:17:11.899366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.841 [2024-11-20 17:17:11.913494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.841 [2024-11-20 17:17:11.913509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.841 [2024-11-20 17:17:11.927137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.841 [2024-11-20 17:17:11.927151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.841 [2024-11-20 17:17:11.939668] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.841 [2024-11-20 17:17:11.939682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.841 [2024-11-20 17:17:11.953882] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.841 [2024-11-20 17:17:11.953897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.841 [2024-11-20 17:17:11.966935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.842 [2024-11-20 17:17:11.966949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.842 19040.00 IOPS, 148.75 MiB/s [2024-11-20T16:17:12.018Z] [2024-11-20 17:17:11.979996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.842 [2024-11-20 17:17:11.980010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.842 [2024-11-20 17:17:11.994127] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.842 [2024-11-20 17:17:11.994142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.842 [2024-11-20 17:17:12.007299] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.842 [2024-11-20 17:17:12.007313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.102 [2024-11-20 17:17:12.022064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:34:20.102 [2024-11-20 17:17:12.022078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:20.102 [2024-11-20 17:17:12.035451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:20.102 [2024-11-20 17:17:12.035464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" pair repeats for every retry, roughly every 13-15 ms, from 17:17:12.050 through 17:17:12.966; identical pairs elided ...]
00:34:20.888 19047.40 IOPS, 148.81 MiB/s [2024-11-20T16:17:13.064Z]
00:34:20.888 [2024-11-20 17:17:12.978686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:20.888 [2024-11-20 17:17:12.978701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:20.888
00:34:20.888 Latency(us)
00:34:20.888 [2024-11-20T16:17:13.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:20.888 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:34:20.888 Nvme1n1 : 5.01 19049.76 148.83 0.00 0.00 6713.42 2880.85 11632.64
00:34:20.888 [2024-11-20T16:17:13.064Z] ===================================================================================================================
00:34:20.888 [2024-11-20T16:17:13.064Z] Total : 19049.76 148.83 0.00 0.00 6713.42 2880.85 11632.64
00:34:20.888 [2024-11-20 17:17:12.986985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:20.888 [2024-11-20 17:17:12.986999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair repeats at ~12 ms intervals from 17:17:12.998 through 17:17:13.083 while the remaining retries drain; identical pairs elided ...]
00:34:21.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2223830) - No such process
00:34:21.149 17:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2223830
00:34:21.149 17:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:21.149 17:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:21.149 17:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:21.149 17:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:21.149 17:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:34:21.149 17:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:21.149 17:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:21.149 delay0
00:34:21.149 17:17:13
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.149 17:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:34:21.149 17:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.149 17:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:21.149 17:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.149 17:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:34:21.149 [2024-11-20 17:17:13.207540] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:29.287 Initializing NVMe Controllers 00:34:29.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:29.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:29.287 Initialization complete. Launching workers. 00:34:29.287 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 4974 00:34:29.287 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5242, failed to submit 52 00:34:29.287 success 5085, unsuccessful 157, failed 0 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:29.287 rmmod nvme_tcp 00:34:29.287 rmmod nvme_fabrics 00:34:29.287 rmmod nvme_keyring 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2221800 ']' 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2221800 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2221800 ']' 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2221800 
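The repeated "Requested NSID 1 already in use" records above are the expected output of zcopy.sh's error-path phase, not a failure: the test keeps re-adding NSID 1 while it is in use, then swaps the namespace onto a delay bdev and runs the abort example against it. A minimal sketch of that sequence, reconstructed from the rpc_cmd records in this log (rpc.py stands in for the suite's rpc_cmd wrapper; paths, bdev names, and arguments are taken from this run):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # drop the busy namespace, then re-expose it behind a delay bdev so the
  # abort tool has slow, in-flight I/O to cancel (latency args as logged)
  $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $RPC bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # queue randrw I/O at the slow namespace and abort it, as the records above show
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
      -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The "abort submitted 5242 ... success 5085, unsuccessful 157, failed 0" summary above is this tool's normal exit report; unsuccessful aborts simply mean the target completed those commands before the abort arrived.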
00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2221800 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2221800' 00:34:29.287 killing process with pid 2221800 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2221800 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2221800 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:29.287 17:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.226 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:30.226 00:34:30.226 real 0m34.248s 00:34:30.226 user 0m43.627s 00:34:30.226 sys 0m12.547s 00:34:30.226 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:30.226 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:30.226 ************************************ 00:34:30.226 END TEST nvmf_zcopy 00:34:30.226 ************************************ 00:34:30.486 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:30.486 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 
-le 1 ']' 00:34:30.486 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:30.487 ************************************ 00:34:30.487 START TEST nvmf_nmic 00:34:30.487 ************************************ 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:30.487 * Looking for test storage... 00:34:30.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:30.487 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:30.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.747 --rc genhtml_branch_coverage=1 00:34:30.747 --rc genhtml_function_coverage=1 00:34:30.747 --rc genhtml_legend=1 00:34:30.747 --rc geninfo_all_blocks=1 00:34:30.747 --rc geninfo_unexecuted_blocks=1 00:34:30.747 00:34:30.747 ' 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:30.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.747 --rc genhtml_branch_coverage=1 00:34:30.747 --rc genhtml_function_coverage=1 00:34:30.747 --rc genhtml_legend=1 00:34:30.747 --rc geninfo_all_blocks=1 00:34:30.747 --rc geninfo_unexecuted_blocks=1 00:34:30.747 00:34:30.747 ' 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:30.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.747 --rc genhtml_branch_coverage=1 00:34:30.747 --rc genhtml_function_coverage=1 00:34:30.747 --rc genhtml_legend=1 00:34:30.747 --rc geninfo_all_blocks=1 00:34:30.747 --rc geninfo_unexecuted_blocks=1 00:34:30.747 00:34:30.747 ' 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:30.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.747 --rc genhtml_branch_coverage=1 00:34:30.747 --rc genhtml_function_coverage=1 00:34:30.747 --rc genhtml_legend=1 00:34:30.747 --rc geninfo_all_blocks=1 00:34:30.747 --rc geninfo_unexecuted_blocks=1 00:34:30.747 00:34:30.747 ' 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:30.747 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated several more times; duplicates elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same duplicated toolchain prefix elided ...]:/var/lib/snapd/snap/bin
00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same duplicated toolchain prefix elided ...]:/var/lib/snapd/snap/bin
00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same duplicated toolchain prefix elided ...]:/var/lib/snapd/snap/bin
00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:30.748 17:17:22
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:30.748 17:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:38.885 17:17:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:38.885 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:38.885 17:17:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:38.885 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:38.885 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:38.885 
17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:38.885 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:38.885 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:38.886 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:38.886 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:38.886 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:38.886 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:38.886 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:38.886 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:38.886 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:38.886 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:38.886 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:38.886 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:38.886 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:38.886 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:38.886 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:38.886 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:38.886 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:38.886 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:38.886 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:38.886 17:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
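Pulling the scattered records above together: the harness splits the two E810 ports between the root namespace (initiator side) and a private namespace for the target; the records that follow bring the links up, open the firewall for port 4420, and ping-check both directions. A condensed sketch, with device names and 10.0.0.x addressing exactly as in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                   # root netns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target netns -> initiator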
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:38.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:38.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms
00:34:38.886
00:34:38.886 --- 10.0.0.2 ping statistics ---
00:34:38.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:38.886 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:38.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:38.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms
00:34:38.886
00:34:38.886 --- 10.0.0.1 ping statistics ---
00:34:38.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:38.886 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2230487
00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic --
nvmf/common.sh@510 -- # waitforlisten 2230487 00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2230487 ']' 00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:38.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:38.886 17:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:38.886 [2024-11-20 17:17:30.258901] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:38.886 [2024-11-20 17:17:30.260049] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:34:38.886 [2024-11-20 17:17:30.260100] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:38.886 [2024-11-20 17:17:30.364428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:38.886 [2024-11-20 17:17:30.419998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:38.886 [2024-11-20 17:17:30.420050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:38.886 [2024-11-20 17:17:30.420058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:38.886 [2024-11-20 17:17:30.420065] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:38.886 [2024-11-20 17:17:30.420074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:38.886 [2024-11-20 17:17:30.422017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:38.886 [2024-11-20 17:17:30.422050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:38.886 [2024-11-20 17:17:30.422267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.886 [2024-11-20 17:17:30.422204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:38.886 [2024-11-20 17:17:30.500996] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:38.886 [2024-11-20 17:17:30.502234] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:38.886 [2024-11-20 17:17:30.502251] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
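The target itself is started inside that namespace in interrupt mode, pinned to four cores, and the harness blocks in waitforlisten until the RPC socket answers. One way to approximate that outside the harness, a sketch assuming the default /var/tmp/spdk.sock RPC socket (the polling loop is an illustration, not the harness's exact implementation):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!

  # poll the RPC socket until the app responds; rpc_get_methods is a cheap query
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

The "Set SPDK running in interrupt mode" and per-core "Reactor started" notices above are the normal startup banner for this configuration; the poll groups then report switching their spdk_threads to interrupt mode.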
00:34:38.886 [2024-11-20 17:17:30.502617] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:38.886 [2024-11-20 17:17:30.502647] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:39.148 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.149 [2024-11-20 17:17:31.139279] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.149 Malloc0 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
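With the target up, nmic.sh provisions the device under test over RPC; the rpc_cmd records above and below map one-to-one onto these calls (rpc_cmd is the suite's wrapper around rpc.py, and every argument here is copied from this run):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8192-byte in-capsule data
  $RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The records that follow then exercise test case1 against this setup.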
00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.149 [2024-11-20 17:17:31.243627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:39.149 test case1: single bdev can't be used in multiple subsystems 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.149 [2024-11-20 17:17:31.278882] bdev.c:8473:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:39.149 [2024-11-20 17:17:31.278911] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:39.149 [2024-11-20 17:17:31.278920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.149 request: 00:34:39.149 { 00:34:39.149 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:39.149 "namespace": { 00:34:39.149 "bdev_name": "Malloc0", 00:34:39.149 "no_auto_visible": false, 00:34:39.149 "hide_metadata": false 00:34:39.149 }, 00:34:39.149 "method": "nvmf_subsystem_add_ns", 00:34:39.149 "req_id": 1 00:34:39.149 } 00:34:39.149 Got JSON-RPC error response 00:34:39.149 response: 00:34:39.149 { 00:34:39.149 "code": -32602, 00:34:39.149 "message": "Invalid parameters" 00:34:39.149 } 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:39.149 17:17:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:39.149 Adding namespace failed - expected result. 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:39.149 test case2: host connect to nvmf target in multiple paths 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:39.149 [2024-11-20 17:17:31.291040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.149 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:39.722 17:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:39.983 17:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:39.983 17:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:39.983 17:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:39.983 17:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:39.983 17:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:42.530 17:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:42.530 17:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:42.530 17:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:42.530 17:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:42.530 17:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:42.530 17:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:42.530 17:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:42.530 [global] 00:34:42.530 thread=1 00:34:42.530 invalidate=1 
00:34:42.530 rw=write 00:34:42.530 time_based=1 00:34:42.530 runtime=1 00:34:42.530 ioengine=libaio 00:34:42.530 direct=1 00:34:42.530 bs=4096 00:34:42.530 iodepth=1 00:34:42.530 norandommap=0 00:34:42.530 numjobs=1 00:34:42.530 00:34:42.530 verify_dump=1 00:34:42.530 verify_backlog=512 00:34:42.530 verify_state_save=0 00:34:42.530 do_verify=1 00:34:42.530 verify=crc32c-intel 00:34:42.530 [job0] 00:34:42.530 filename=/dev/nvme0n1 00:34:42.530 Could not set queue depth (nvme0n1) 00:34:42.530 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:42.530 fio-3.35 00:34:42.530 Starting 1 thread 00:34:43.474 00:34:43.474 job0: (groupid=0, jobs=1): err= 0: pid=2231362: Wed Nov 20 17:17:35 2024 00:34:43.474 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:43.474 slat (nsec): min=6832, max=65301, avg=24790.77, stdev=6939.60 00:34:43.474 clat (usec): min=266, max=42013, avg=1216.17, stdev=4772.67 00:34:43.474 lat (usec): min=274, max=42038, avg=1240.96, stdev=4772.57 00:34:43.474 clat percentiles (usec): 00:34:43.474 | 1.00th=[ 355], 5.00th=[ 490], 10.00th=[ 523], 20.00th=[ 578], 00:34:43.474 | 30.00th=[ 611], 40.00th=[ 627], 50.00th=[ 652], 60.00th=[ 701], 00:34:43.474 | 70.00th=[ 725], 80.00th=[ 750], 90.00th=[ 775], 95.00th=[ 816], 00:34:43.474 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:43.474 | 99.99th=[42206] 00:34:43.474 write: IOPS=575, BW=2302KiB/s (2357kB/s)(2304KiB/1001msec); 0 zone resets 00:34:43.474 slat (usec): min=9, max=24949, avg=71.98, stdev=1038.42 00:34:43.474 clat (usec): min=154, max=777, avg=540.95, stdev=130.51 00:34:43.474 lat (usec): min=179, max=25615, avg=612.93, stdev=1052.23 00:34:43.474 clat percentiles (usec): 00:34:43.474 | 1.00th=[ 172], 5.00th=[ 269], 10.00th=[ 338], 20.00th=[ 441], 00:34:43.474 | 30.00th=[ 502], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 586], 00:34:43.474 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 685], 95.00th=[ 701], 00:34:43.474 | 99.00th=[ 750], 99.50th=[ 758], 99.90th=[ 775], 99.95th=[ 775], 00:34:43.474 | 99.99th=[ 775] 00:34:43.474 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:43.474 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:43.474 lat (usec) : 250=1.47%, 500=17.28%, 750=71.69%, 1000=8.82% 00:34:43.474 lat (msec) : 2=0.09%, 50=0.64% 00:34:43.474 cpu : usr=2.40%, sys=2.10%, ctx=1092, majf=0, minf=1 00:34:43.474 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:43.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.474 issued rwts: total=512,576,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.474 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:43.474 00:34:43.474 Run status group 0 (all jobs): 00:34:43.474 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:34:43.474 WRITE: bw=2302KiB/s (2357kB/s), 2302KiB/s-2302KiB/s (2357kB/s-2357kB/s), io=2304KiB (2359kB), run=1001-1001msec 00:34:43.474 00:34:43.474 Disk stats (read/write): 00:34:43.474 nvme0n1: ios=474/512, merge=0/0, ticks=1488/274, in_queue=1762, util=98.40% 00:34:43.474 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:43.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:43.734 17:17:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:43.734 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:34:43.734 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:43.734 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:43.734 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:43.734 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:43.734 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:34:43.734 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:43.734 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:43.735 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:43.735 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:43.735 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:43.735 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:43.735 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:43.735 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:43.735 rmmod nvme_tcp 00:34:43.735 rmmod nvme_fabrics 00:34:43.735 rmmod nvme_keyring 00:34:43.735 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:43.735 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:43.735 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:43.735 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2230487 ']' 00:34:43.735 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2230487 00:34:43.735 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2230487 ']' 00:34:43.735 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2230487 00:34:43.995 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:34:43.995 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:43.995 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2230487 00:34:43.995 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:43.995 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:43.995 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 2230487' 00:34:43.995 killing process with pid 2230487 00:34:43.995 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2230487 00:34:43.995 17:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2230487 00:34:43.995 17:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:43.995 17:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:43.995 17:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:43.995 17:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:43.996 17:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:43.996 17:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:43.996 17:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:43.996 17:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:43.996 17:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:43.996 17:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.996 17:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:43.996 17:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:46.544 00:34:46.544 real 0m15.721s 00:34:46.544 user 0m36.275s 00:34:46.544 sys 0m7.336s 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:46.544 ************************************ 00:34:46.544 END TEST nvmf_nmic 00:34:46.544 ************************************ 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:46.544 ************************************ 00:34:46.544 START TEST nvmf_fio_target 00:34:46.544 ************************************ 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:46.544 * Looking for test storage... 
00:34:46.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:46.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.544 --rc genhtml_branch_coverage=1 00:34:46.544 --rc genhtml_function_coverage=1 00:34:46.544 --rc genhtml_legend=1 00:34:46.544 --rc geninfo_all_blocks=1 00:34:46.544 --rc geninfo_unexecuted_blocks=1 00:34:46.544 00:34:46.544 ' 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:46.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.544 --rc genhtml_branch_coverage=1 00:34:46.544 --rc genhtml_function_coverage=1 00:34:46.544 --rc genhtml_legend=1 00:34:46.544 --rc geninfo_all_blocks=1 00:34:46.544 --rc geninfo_unexecuted_blocks=1 00:34:46.544 00:34:46.544 ' 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:46.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.544 --rc genhtml_branch_coverage=1 00:34:46.544 --rc genhtml_function_coverage=1 00:34:46.544 --rc genhtml_legend=1 00:34:46.544 --rc geninfo_all_blocks=1 00:34:46.544 --rc geninfo_unexecuted_blocks=1 00:34:46.544 00:34:46.544 ' 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:46.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.544 --rc genhtml_branch_coverage=1 00:34:46.544 --rc genhtml_function_coverage=1 00:34:46.544 --rc genhtml_legend=1 00:34:46.544 --rc geninfo_all_blocks=1 00:34:46.544 --rc geninfo_unexecuted_blocks=1 00:34:46.544 
00:34:46.544 ' 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:46.544 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:46.545 17:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:54.684 17:17:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:54.684 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:54.685 17:17:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:54.685 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:54.685 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:54.685 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:54.685 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:54.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:54.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.726 ms 00:34:54.685 00:34:54.685 --- 10.0.0.2 ping statistics --- 00:34:54.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:54.685 rtt min/avg/max/mdev = 0.726/0.726/0.726/0.000 ms 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:54.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:54.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:34:54.685 00:34:54.685 --- 10.0.0.1 ping statistics --- 00:34:54.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:54.685 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:54.685 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:54.686 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:54.686 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:54.686 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:54.686 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:54.686 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2235756 00:34:54.686 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2235756 00:34:54.686 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:54.686 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2235756 ']' 00:34:54.686 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:54.686 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:54.686 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:54.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
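Everything nvmf_tcp_init just traced reduces to a small amount of ip/iptables plumbing: the target's ice port (cvl_0_0) is moved into a private network namespace with 10.0.0.2/24, the initiator port (cvl_0_1) keeps 10.0.0.1/24 in the root namespace, and a firewall rule admits the NVMe/TCP listener port. A condensed restatement of the commands from the trace, using the interface and namespace names discovered above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged with an SPDK_NVMF comment
    ping -c 1 10.0.0.2                                      # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> root ns

The sub-millisecond round trips in the ping output confirm the two namespaces can reach each other before the target is started.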
00:34:54.686 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:54.686 17:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:54.686 [2024-11-20 17:17:46.046033] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:54.686 [2024-11-20 17:17:46.047173] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:34:54.686 [2024-11-20 17:17:46.047228] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:54.686 [2024-11-20 17:17:46.147661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:54.686 [2024-11-20 17:17:46.201417] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:54.686 [2024-11-20 17:17:46.201469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:54.686 [2024-11-20 17:17:46.201478] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:54.686 [2024-11-20 17:17:46.201485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:54.686 [2024-11-20 17:17:46.201491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:54.686 [2024-11-20 17:17:46.203801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:54.686 [2024-11-20 17:17:46.203962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:54.686 [2024-11-20 17:17:46.204122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:54.686 [2024-11-20 17:17:46.204122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:54.686 [2024-11-20 17:17:46.282800] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:54.686 [2024-11-20 17:17:46.283957] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:54.686 [2024-11-20 17:17:46.283995] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:54.686 [2024-11-20 17:17:46.284408] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:54.686 [2024-11-20 17:17:46.284449] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
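nvmfappstart then launches the target inside that namespace in interrupt mode: -m 0xF pins four reactors to cores 0-3 (matching the "Total cores available: 4" notice), -e 0xFFFF sets the tracepoint group mask, and --interrupt-mode produces the spdk_interrupt_mode_enable and per-poll-group "intr mode" notices above. A sketch of the launch, condensed from the trace (the backgrounding and pid capture are how nvmf/common.sh structures it; waitforlisten is the harness helper from common/autotest_common.sh that blocks until the app answers on the RPC socket):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!                      # 2235756 in this run
    waitforlisten $nvmfpid          # polls /var/tmp/spdk.sock until the target is up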
00:34:54.946 17:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:54.946 17:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:34:54.946 17:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:54.946 17:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:54.946 17:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:54.946 17:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:54.946 17:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:54.946 [2024-11-20 17:17:47.069014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:55.213 17:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:55.213 17:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:55.213 17:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:55.519 17:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:55.519 17:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:55.922 17:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:55.922 17:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:55.922 17:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:55.922 17:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:56.182 17:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:56.182 17:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:56.182 17:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:56.442 17:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:56.442 17:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:56.703 17:17:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:56.704 17:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:56.964 17:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:56.964 17:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:56.964 17:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:57.229 17:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:57.229 17:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:57.493 17:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:57.493 [2024-11-20 17:17:49.628973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:57.493 17:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:57.754 17:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:58.014 17:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:58.588 17:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:58.588 17:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:34:58.588 17:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:58.588 17:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:34:58.588 17:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:34:58.588 17:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:35:00.498 17:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:00.498 17:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:35:00.498 17:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:00.498 17:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:35:00.498 17:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:00.498 17:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:35:00.498 17:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:00.498 [global] 00:35:00.498 thread=1 00:35:00.498 invalidate=1 00:35:00.498 rw=write 00:35:00.498 time_based=1 00:35:00.498 runtime=1 00:35:00.498 ioengine=libaio 00:35:00.498 direct=1 00:35:00.498 bs=4096 00:35:00.498 iodepth=1 00:35:00.498 norandommap=0 00:35:00.498 numjobs=1 00:35:00.498 00:35:00.498 verify_dump=1 00:35:00.498 verify_backlog=512 00:35:00.498 verify_state_save=0 00:35:00.498 do_verify=1 00:35:00.498 verify=crc32c-intel 00:35:00.498 [job0] 00:35:00.498 filename=/dev/nvme0n1 00:35:00.498 [job1] 00:35:00.498 filename=/dev/nvme0n2 00:35:00.498 [job2] 00:35:00.498 filename=/dev/nvme0n3 00:35:00.498 [job3] 00:35:00.498 filename=/dev/nvme0n4 00:35:00.498 Could not set queue depth (nvme0n1) 00:35:00.498 Could not set queue depth (nvme0n2) 00:35:00.498 Could not set queue depth (nvme0n3) 00:35:00.498 Could not set queue depth (nvme0n4) 00:35:01.088 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:01.088 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:01.088 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:01.088 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:01.088 fio-3.35 00:35:01.088 Starting 4 threads 00:35:02.503 00:35:02.503 job0: (groupid=0, jobs=1): err= 0: pid=2237292: Wed Nov 20 17:17:54 2024 00:35:02.503 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:35:02.503 slat (nsec): min=6598, max=60104, avg=21368.68, stdev=8757.77 00:35:02.503 clat (usec): min=163, max=738, avg=541.61, stdev=71.91 00:35:02.503 lat (usec): min=170, max=754, avg=562.98, stdev=73.92 00:35:02.503 clat percentiles (usec): 00:35:02.503 | 1.00th=[ 334], 5.00th=[ 420], 10.00th=[ 453], 20.00th=[ 486], 00:35:02.503 | 30.00th=[ 515], 40.00th=[ 537], 50.00th=[ 553], 60.00th=[ 570], 00:35:02.503 | 70.00th=[ 578], 80.00th=[ 594], 90.00th=[ 619], 95.00th=[ 635], 00:35:02.503 | 99.00th=[ 685], 99.50th=[ 693], 99.90th=[ 734], 99.95th=[ 742], 00:35:02.503 | 99.99th=[ 742] 00:35:02.503 write: IOPS=1111, BW=4448KiB/s (4554kB/s)(4452KiB/1001msec); 0 zone resets 00:35:02.503 slat (nsec): min=9437, max=87279, avg=28540.10, stdev=9909.93 00:35:02.503 clat (usec): min=107, max=570, avg=338.48, stdev=76.64 00:35:02.503 lat (usec): min=117, max=588, avg=367.02, stdev=77.27 00:35:02.503 clat percentiles (usec): 00:35:02.503 | 1.00th=[ 128], 5.00th=[ 215], 10.00th=[ 247], 20.00th=[ 273], 00:35:02.503 | 30.00th=[ 297], 40.00th=[ 334], 50.00th=[ 355], 60.00th=[ 367], 00:35:02.503 | 70.00th=[ 379], 80.00th=[ 396], 90.00th=[ 429], 95.00th=[ 453], 00:35:02.503 | 99.00th=[ 506], 99.50th=[ 
537], 99.90th=[ 553], 99.95th=[ 570], 00:35:02.503 | 99.99th=[ 570] 00:35:02.503 bw ( KiB/s): min= 4200, max= 4200, per=38.46%, avg=4200.00, stdev= 0.00, samples=1 00:35:02.503 iops : min= 1050, max= 1050, avg=1050.00, stdev= 0.00, samples=1 00:35:02.503 lat (usec) : 250=5.90%, 500=57.88%, 750=36.22% 00:35:02.503 cpu : usr=2.30%, sys=6.20%, ctx=2137, majf=0, minf=2 00:35:02.503 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:02.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.503 issued rwts: total=1024,1113,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.503 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:02.503 job1: (groupid=0, jobs=1): err= 0: pid=2237293: Wed Nov 20 17:17:54 2024 00:35:02.503 read: IOPS=345, BW=1382KiB/s (1415kB/s)(1440KiB/1042msec) 00:35:02.503 slat (nsec): min=7141, max=73030, avg=24931.57, stdev=7633.36 00:35:02.503 clat (usec): min=253, max=42068, avg=2343.22, stdev=8230.59 00:35:02.503 lat (usec): min=260, max=42096, avg=2368.15, stdev=8231.20 00:35:02.503 clat percentiles (usec): 00:35:02.503 | 1.00th=[ 343], 5.00th=[ 437], 10.00th=[ 519], 20.00th=[ 586], 00:35:02.503 | 30.00th=[ 619], 40.00th=[ 635], 50.00th=[ 652], 60.00th=[ 668], 00:35:02.503 | 70.00th=[ 676], 80.00th=[ 693], 90.00th=[ 725], 95.00th=[ 766], 00:35:02.503 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:02.503 | 99.99th=[42206] 00:35:02.503 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:35:02.503 slat (nsec): min=9981, max=51521, avg=26828.80, stdev=11973.83 00:35:02.503 clat (usec): min=107, max=555, avg=320.26, stdev=67.94 00:35:02.503 lat (usec): min=118, max=566, avg=347.09, stdev=67.00 00:35:02.503 clat percentiles (usec): 00:35:02.503 | 1.00th=[ 186], 5.00th=[ 206], 10.00th=[ 223], 20.00th=[ 273], 00:35:02.503 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 322], 60.00th=[ 334], 00:35:02.503 | 70.00th=[ 351], 80.00th=[ 371], 90.00th=[ 404], 95.00th=[ 437], 00:35:02.503 | 99.00th=[ 490], 99.50th=[ 510], 99.90th=[ 553], 99.95th=[ 553], 00:35:02.503 | 99.99th=[ 553] 00:35:02.503 bw ( KiB/s): min= 4096, max= 4096, per=37.50%, avg=4096.00, stdev= 0.00, samples=1 00:35:02.503 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:02.503 lat (usec) : 250=9.29%, 500=52.41%, 750=35.78%, 1000=0.69% 00:35:02.503 lat (msec) : 2=0.11%, 50=1.72% 00:35:02.503 cpu : usr=1.06%, sys=2.31%, ctx=874, majf=0, minf=1 00:35:02.503 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:02.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.503 issued rwts: total=360,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.503 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:02.503 job2: (groupid=0, jobs=1): err= 0: pid=2237294: Wed Nov 20 17:17:54 2024 00:35:02.503 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:02.503 slat (nsec): min=25413, max=56627, avg=26811.04, stdev=2880.03 00:35:02.503 clat (usec): min=739, max=1469, avg=1005.71, stdev=99.26 00:35:02.503 lat (usec): min=766, max=1495, avg=1032.52, stdev=99.12 00:35:02.503 clat percentiles (usec): 00:35:02.503 | 1.00th=[ 799], 5.00th=[ 824], 10.00th=[ 873], 20.00th=[ 922], 00:35:02.503 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 1004], 60.00th=[ 1037], 
00:35:02.503 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:35:02.503 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 1467], 99.95th=[ 1467], 00:35:02.503 | 99.99th=[ 1467] 00:35:02.503 write: IOPS=707, BW=2829KiB/s (2897kB/s)(2832KiB/1001msec); 0 zone resets 00:35:02.503 slat (nsec): min=10440, max=63464, avg=33224.26, stdev=7982.57 00:35:02.503 clat (usec): min=227, max=964, avg=611.56, stdev=119.03 00:35:02.503 lat (usec): min=238, max=999, avg=644.78, stdev=121.81 00:35:02.503 clat percentiles (usec): 00:35:02.503 | 1.00th=[ 330], 5.00th=[ 404], 10.00th=[ 469], 20.00th=[ 506], 00:35:02.503 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:35:02.503 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 766], 95.00th=[ 799], 00:35:02.503 | 99.00th=[ 881], 99.50th=[ 938], 99.90th=[ 963], 99.95th=[ 963], 00:35:02.503 | 99.99th=[ 963] 00:35:02.503 bw ( KiB/s): min= 4096, max= 4096, per=37.50%, avg=4096.00, stdev= 0.00, samples=1 00:35:02.503 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:02.503 lat (usec) : 250=0.08%, 500=11.07%, 750=40.25%, 1000=26.56% 00:35:02.503 lat (msec) : 2=22.05% 00:35:02.503 cpu : usr=1.20%, sys=4.50%, ctx=1222, majf=0, minf=1 00:35:02.503 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:02.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.503 issued rwts: total=512,708,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.503 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:02.503 job3: (groupid=0, jobs=1): err= 0: pid=2237300: Wed Nov 20 17:17:54 2024 00:35:02.503 read: IOPS=16, BW=67.7KiB/s (69.4kB/s)(68.0KiB/1004msec) 00:35:02.503 slat (nsec): min=26518, max=42770, avg=27730.59, stdev=3879.82 00:35:02.503 clat (usec): min=40960, max=42132, avg=41881.76, stdev=286.41 00:35:02.503 lat (usec): min=40986, max=42159, avg=41909.49, stdev=285.16 00:35:02.503 clat percentiles (usec): 00:35:02.503 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:35:02.503 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:35:02.503 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:02.503 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:02.503 | 99.99th=[42206] 00:35:02.503 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:35:02.503 slat (nsec): min=6125, max=71009, avg=21994.46, stdev=13670.00 00:35:02.503 clat (usec): min=213, max=980, avg=540.49, stdev=130.19 00:35:02.503 lat (usec): min=225, max=1034, avg=562.49, stdev=134.49 00:35:02.503 clat percentiles (usec): 00:35:02.503 | 1.00th=[ 237], 5.00th=[ 338], 10.00th=[ 371], 20.00th=[ 437], 00:35:02.503 | 30.00th=[ 478], 40.00th=[ 502], 50.00th=[ 529], 60.00th=[ 570], 00:35:02.504 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 709], 95.00th=[ 775], 00:35:02.504 | 99.00th=[ 832], 99.50th=[ 865], 99.90th=[ 979], 99.95th=[ 979], 00:35:02.504 | 99.99th=[ 979] 00:35:02.504 bw ( KiB/s): min= 4096, max= 4096, per=37.50%, avg=4096.00, stdev= 0.00, samples=1 00:35:02.504 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:02.504 lat (usec) : 250=1.32%, 500=36.48%, 750=53.12%, 1000=5.86% 00:35:02.504 lat (msec) : 50=3.21% 00:35:02.504 cpu : usr=0.30%, sys=1.50%, ctx=533, majf=0, minf=1 00:35:02.504 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:02.504 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.504 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.504 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:02.504 00:35:02.504 Run status group 0 (all jobs): 00:35:02.504 READ: bw=7344KiB/s (7520kB/s), 67.7KiB/s-4092KiB/s (69.4kB/s-4190kB/s), io=7652KiB (7836kB), run=1001-1042msec 00:35:02.504 WRITE: bw=10.7MiB/s (11.2MB/s), 1965KiB/s-4448KiB/s (2013kB/s-4554kB/s), io=11.1MiB (11.7MB), run=1001-1042msec 00:35:02.504 00:35:02.504 Disk stats (read/write): 00:35:02.504 nvme0n1: ios=756/1024, merge=0/0, ticks=426/332, in_queue=758, util=82.77% 00:35:02.504 nvme0n2: ios=410/512, merge=0/0, ticks=1690/153, in_queue=1843, util=96.71% 00:35:02.504 nvme0n3: ios=446/512, merge=0/0, ticks=1343/299, in_queue=1642, util=95.70% 00:35:02.504 nvme0n4: ios=68/512, merge=0/0, ticks=1035/270, in_queue=1305, util=95.72% 00:35:02.504 17:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:35:02.504 [global] 00:35:02.504 thread=1 00:35:02.504 invalidate=1 00:35:02.504 rw=randwrite 00:35:02.504 time_based=1 00:35:02.504 runtime=1 00:35:02.504 ioengine=libaio 00:35:02.504 direct=1 00:35:02.504 bs=4096 00:35:02.504 iodepth=1 00:35:02.504 norandommap=0 00:35:02.504 numjobs=1 00:35:02.504 00:35:02.504 verify_dump=1 00:35:02.504 verify_backlog=512 00:35:02.504 verify_state_save=0 00:35:02.504 do_verify=1 00:35:02.504 verify=crc32c-intel 00:35:02.504 [job0] 00:35:02.504 filename=/dev/nvme0n1 00:35:02.504 [job1] 00:35:02.504 filename=/dev/nvme0n2 00:35:02.504 [job2] 00:35:02.504 filename=/dev/nvme0n3 00:35:02.504 [job3] 00:35:02.504 filename=/dev/nvme0n4 00:35:02.504 Could not set queue depth (nvme0n1) 00:35:02.504 Could not set queue depth (nvme0n2) 00:35:02.504 Could not set queue depth (nvme0n3) 00:35:02.504 Could not set queue depth (nvme0n4) 00:35:02.771 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:02.771 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:02.771 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:02.771 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:02.771 fio-3.35 00:35:02.771 Starting 4 threads 00:35:04.178 00:35:04.178 job0: (groupid=0, jobs=1): err= 0: pid=2237820: Wed Nov 20 17:17:55 2024 00:35:04.178 read: IOPS=450, BW=1802KiB/s (1845kB/s)(1872KiB/1039msec) 00:35:04.178 slat (nsec): min=9947, max=45802, avg=25988.99, stdev=2814.09 00:35:04.178 clat (usec): min=514, max=41383, avg=1506.45, stdev=4115.34 00:35:04.178 lat (usec): min=539, max=41409, avg=1532.44, stdev=4115.39 00:35:04.178 clat percentiles (usec): 00:35:04.178 | 1.00th=[ 824], 5.00th=[ 898], 10.00th=[ 938], 20.00th=[ 988], 00:35:04.178 | 30.00th=[ 1020], 40.00th=[ 1045], 50.00th=[ 1090], 60.00th=[ 1123], 00:35:04.178 | 70.00th=[ 1139], 80.00th=[ 1188], 90.00th=[ 1237], 95.00th=[ 1254], 00:35:04.178 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:04.178 | 99.99th=[41157] 00:35:04.178 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:35:04.178 slat (nsec): min=9167, max=50533, 
avg=28718.97, stdev=8195.13 00:35:04.178 clat (usec): min=213, max=1047, avg=582.70, stdev=143.20 00:35:04.178 lat (usec): min=223, max=1078, avg=611.42, stdev=145.91 00:35:04.178 clat percentiles (usec): 00:35:04.178 | 1.00th=[ 273], 5.00th=[ 326], 10.00th=[ 396], 20.00th=[ 474], 00:35:04.178 | 30.00th=[ 510], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 619], 00:35:04.178 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 758], 95.00th=[ 807], 00:35:04.178 | 99.00th=[ 955], 99.50th=[ 1012], 99.90th=[ 1045], 99.95th=[ 1045], 00:35:04.178 | 99.99th=[ 1045] 00:35:04.178 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:35:04.178 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:04.178 lat (usec) : 250=0.31%, 500=13.98%, 750=32.76%, 1000=17.04% 00:35:04.178 lat (msec) : 2=35.41%, 50=0.51% 00:35:04.178 cpu : usr=1.73%, sys=2.60%, ctx=980, majf=0, minf=1 00:35:04.178 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:04.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.178 issued rwts: total=468,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.178 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:04.178 job1: (groupid=0, jobs=1): err= 0: pid=2237821: Wed Nov 20 17:17:55 2024 00:35:04.178 read: IOPS=16, BW=66.8KiB/s (68.4kB/s)(68.0KiB/1018msec) 00:35:04.178 slat (nsec): min=25867, max=26907, avg=26344.76, stdev=287.74 00:35:04.178 clat (usec): min=988, max=42024, avg=39223.24, stdev=9863.43 00:35:04.178 lat (usec): min=1014, max=42051, avg=39249.59, stdev=9863.41 00:35:04.178 clat percentiles (usec): 00:35:04.178 | 1.00th=[ 988], 5.00th=[ 988], 10.00th=[40633], 20.00th=[41157], 00:35:04.178 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:35:04.179 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:04.179 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:04.179 | 99.99th=[42206] 00:35:04.179 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:35:04.179 slat (nsec): min=9763, max=68135, avg=31943.96, stdev=7421.76 00:35:04.179 clat (usec): min=222, max=1101, avg=643.44, stdev=161.80 00:35:04.179 lat (usec): min=235, max=1134, avg=675.39, stdev=163.14 00:35:04.179 clat percentiles (usec): 00:35:04.179 | 1.00th=[ 285], 5.00th=[ 379], 10.00th=[ 437], 20.00th=[ 506], 00:35:04.179 | 30.00th=[ 545], 40.00th=[ 594], 50.00th=[ 644], 60.00th=[ 685], 00:35:04.179 | 70.00th=[ 725], 80.00th=[ 783], 90.00th=[ 865], 95.00th=[ 914], 00:35:04.179 | 99.00th=[ 996], 99.50th=[ 1020], 99.90th=[ 1106], 99.95th=[ 1106], 00:35:04.179 | 99.99th=[ 1106] 00:35:04.179 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:35:04.179 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:04.179 lat (usec) : 250=0.57%, 500=17.39%, 750=53.50%, 1000=24.76% 00:35:04.179 lat (msec) : 2=0.76%, 50=3.02% 00:35:04.179 cpu : usr=0.88%, sys=1.57%, ctx=530, majf=0, minf=1 00:35:04.179 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:04.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.179 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.179 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:04.179 job2: 
(groupid=0, jobs=1): err= 0: pid=2237822: Wed Nov 20 17:17:55 2024 00:35:04.179 read: IOPS=30, BW=123KiB/s (126kB/s)(128KiB/1038msec) 00:35:04.179 slat (nsec): min=7554, max=29264, avg=23687.94, stdev=6797.43 00:35:04.179 clat (usec): min=427, max=42553, avg=25777.02, stdev=19996.63 00:35:04.179 lat (usec): min=455, max=42579, avg=25800.70, stdev=20000.13 00:35:04.179 clat percentiles (usec): 00:35:04.179 | 1.00th=[ 429], 5.00th=[ 482], 10.00th=[ 570], 20.00th=[ 644], 00:35:04.179 | 30.00th=[ 725], 40.00th=[25297], 50.00th=[41157], 60.00th=[41681], 00:35:04.179 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:04.179 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:35:04.179 | 99.99th=[42730] 00:35:04.179 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:35:04.179 slat (usec): min=9, max=115, avg=24.40, stdev=11.80 00:35:04.179 clat (usec): min=120, max=1028, avg=382.71, stdev=101.21 00:35:04.179 lat (usec): min=130, max=1060, avg=407.11, stdev=104.95 00:35:04.179 clat percentiles (usec): 00:35:04.179 | 1.00th=[ 184], 5.00th=[ 215], 10.00th=[ 265], 20.00th=[ 302], 00:35:04.179 | 30.00th=[ 322], 40.00th=[ 347], 50.00th=[ 367], 60.00th=[ 396], 00:35:04.179 | 70.00th=[ 449], 80.00th=[ 478], 90.00th=[ 510], 95.00th=[ 537], 00:35:04.179 | 99.00th=[ 594], 99.50th=[ 603], 99.90th=[ 1029], 99.95th=[ 1029], 00:35:04.179 | 99.99th=[ 1029] 00:35:04.179 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:35:04.179 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:04.179 lat (usec) : 250=7.72%, 500=74.26%, 750=13.79%, 1000=0.37% 00:35:04.179 lat (msec) : 2=0.18%, 50=3.68% 00:35:04.179 cpu : usr=0.68%, sys=1.16%, ctx=545, majf=0, minf=1 00:35:04.179 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:04.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.179 issued rwts: total=32,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.179 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:04.179 job3: (groupid=0, jobs=1): err= 0: pid=2237823: Wed Nov 20 17:17:55 2024 00:35:04.179 read: IOPS=505, BW=2024KiB/s (2072kB/s)(2052KiB/1014msec) 00:35:04.179 slat (nsec): min=6751, max=46825, avg=23728.12, stdev=7341.93 00:35:04.179 clat (usec): min=170, max=41393, avg=1228.33, stdev=5019.71 00:35:04.179 lat (usec): min=178, max=41404, avg=1252.06, stdev=5019.83 00:35:04.179 clat percentiles (usec): 00:35:04.179 | 1.00th=[ 269], 5.00th=[ 371], 10.00th=[ 465], 20.00th=[ 529], 00:35:04.179 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 619], 60.00th=[ 635], 00:35:04.179 | 70.00th=[ 652], 80.00th=[ 676], 90.00th=[ 709], 95.00th=[ 758], 00:35:04.179 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:04.179 | 99.99th=[41157] 00:35:04.179 write: IOPS=1009, BW=4039KiB/s (4136kB/s)(4096KiB/1014msec); 0 zone resets 00:35:04.179 slat (nsec): min=9375, max=66511, avg=26412.70, stdev=10521.55 00:35:04.179 clat (usec): min=107, max=600, avg=325.54, stdev=79.81 00:35:04.179 lat (usec): min=117, max=613, avg=351.95, stdev=82.00 00:35:04.179 clat percentiles (usec): 00:35:04.179 | 1.00th=[ 122], 5.00th=[ 178], 10.00th=[ 237], 20.00th=[ 260], 00:35:04.179 | 30.00th=[ 285], 40.00th=[ 306], 50.00th=[ 334], 60.00th=[ 355], 00:35:04.179 | 70.00th=[ 371], 80.00th=[ 392], 90.00th=[ 420], 95.00th=[ 449], 00:35:04.179 | 99.00th=[ 506], 
99.50th=[ 537], 99.90th=[ 545], 99.95th=[ 603], 00:35:04.179 | 99.99th=[ 603] 00:35:04.179 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=2 00:35:04.179 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:35:04.179 lat (usec) : 250=10.61%, 500=59.53%, 750=28.17%, 1000=1.17% 00:35:04.179 lat (msec) : 50=0.52% 00:35:04.179 cpu : usr=1.28%, sys=4.84%, ctx=1537, majf=0, minf=1 00:35:04.179 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:04.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.179 issued rwts: total=513,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.179 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:04.179 00:35:04.179 Run status group 0 (all jobs): 00:35:04.179 READ: bw=3965KiB/s (4061kB/s), 66.8KiB/s-2024KiB/s (68.4kB/s-2072kB/s), io=4120KiB (4219kB), run=1014-1039msec 00:35:04.179 WRITE: bw=9856KiB/s (10.1MB/s), 1971KiB/s-4039KiB/s (2018kB/s-4136kB/s), io=10.0MiB (10.5MB), run=1014-1039msec 00:35:04.179 00:35:04.179 Disk stats (read/write): 00:35:04.179 nvme0n1: ios=512/512, merge=0/0, ticks=538/279, in_queue=817, util=86.97% 00:35:04.179 nvme0n2: ios=62/512, merge=0/0, ticks=766/317, in_queue=1083, util=96.52% 00:35:04.179 nvme0n3: ios=27/512, merge=0/0, ticks=615/186, in_queue=801, util=88.23% 00:35:04.179 nvme0n4: ios=512/611, merge=0/0, ticks=585/198, in_queue=783, util=89.47% 00:35:04.179 17:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:35:04.179 [global] 00:35:04.179 thread=1 00:35:04.179 invalidate=1 00:35:04.179 rw=write 00:35:04.179 time_based=1 00:35:04.179 runtime=1 00:35:04.179 ioengine=libaio 00:35:04.179 direct=1 00:35:04.179 bs=4096 00:35:04.179 iodepth=128 00:35:04.179 norandommap=0 00:35:04.179 numjobs=1 00:35:04.179 00:35:04.179 verify_dump=1 00:35:04.179 verify_backlog=512 00:35:04.179 verify_state_save=0 00:35:04.179 do_verify=1 00:35:04.179 verify=crc32c-intel 00:35:04.179 [job0] 00:35:04.179 filename=/dev/nvme0n1 00:35:04.179 [job1] 00:35:04.179 filename=/dev/nvme0n2 00:35:04.179 [job2] 00:35:04.179 filename=/dev/nvme0n3 00:35:04.179 [job3] 00:35:04.179 filename=/dev/nvme0n4 00:35:04.179 Could not set queue depth (nvme0n1) 00:35:04.179 Could not set queue depth (nvme0n2) 00:35:04.179 Could not set queue depth (nvme0n3) 00:35:04.179 Could not set queue depth (nvme0n4) 00:35:04.446 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:04.446 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:04.446 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:04.446 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:04.446 fio-3.35 00:35:04.446 Starting 4 threads 00:35:05.855 00:35:05.855 job0: (groupid=0, jobs=1): err= 0: pid=2238342: Wed Nov 20 17:17:57 2024 00:35:05.855 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec) 00:35:05.855 slat (nsec): min=1005, max=13109k, avg=89963.97, stdev=648227.99 00:35:05.855 clat (usec): min=2492, max=44906, avg=11531.26, stdev=6692.22 00:35:05.855 lat (usec): min=2497, max=44914, avg=11621.22, stdev=6746.30 
00:35:05.855 clat percentiles (usec): 00:35:05.855 | 1.00th=[ 4113], 5.00th=[ 5866], 10.00th=[ 6194], 20.00th=[ 7046], 00:35:05.855 | 30.00th=[ 7898], 40.00th=[ 8586], 50.00th=[ 9634], 60.00th=[10683], 00:35:05.855 | 70.00th=[12387], 80.00th=[13566], 90.00th=[18744], 95.00th=[25035], 00:35:05.855 | 99.00th=[39584], 99.50th=[42730], 99.90th=[44303], 99.95th=[44827], 00:35:05.855 | 99.99th=[44827] 00:35:05.855 write: IOPS=5257, BW=20.5MiB/s (21.5MB/s)(20.7MiB/1010msec); 0 zone resets 00:35:05.855 slat (nsec): min=1680, max=10989k, avg=95359.12, stdev=626700.42 00:35:05.855 clat (msec): min=2, max=106, avg=12.97, stdev=14.47 00:35:05.855 lat (msec): min=2, max=106, avg=13.07, stdev=14.54 00:35:05.855 clat percentiles (msec): 00:35:05.855 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 7], 00:35:05.855 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:35:05.855 | 70.00th=[ 12], 80.00th=[ 14], 90.00th=[ 21], 95.00th=[ 36], 00:35:05.855 | 99.00th=[ 96], 99.50th=[ 105], 99.90th=[ 107], 99.95th=[ 107], 00:35:05.855 | 99.99th=[ 107] 00:35:05.855 bw ( KiB/s): min=20480, max=20976, per=22.53%, avg=20728.00, stdev=350.72, samples=2 00:35:05.855 iops : min= 5120, max= 5244, avg=5182.00, stdev=87.68, samples=2 00:35:05.855 lat (msec) : 4=1.87%, 10=54.98%, 20=33.76%, 50=7.95%, 100=1.07% 00:35:05.855 lat (msec) : 250=0.37% 00:35:05.855 cpu : usr=3.37%, sys=6.34%, ctx=456, majf=0, minf=1 00:35:05.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:35:05.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:05.855 issued rwts: total=5120,5310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:05.855 job1: (groupid=0, jobs=1): err= 0: pid=2238343: Wed Nov 20 17:17:57 2024 00:35:05.855 read: IOPS=5422, BW=21.2MiB/s (22.2MB/s)(21.3MiB/1006msec) 00:35:05.855 slat (nsec): min=904, max=9644.7k, avg=80749.06, stdev=580783.18 00:35:05.855 clat (usec): min=1939, max=28390, avg=10704.50, stdev=3569.38 00:35:05.855 lat (usec): min=1947, max=28395, avg=10785.25, stdev=3608.18 00:35:05.855 clat percentiles (usec): 00:35:05.855 | 1.00th=[ 4621], 5.00th=[ 6390], 10.00th=[ 6718], 20.00th=[ 7963], 00:35:05.855 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[10028], 60.00th=[10814], 00:35:05.855 | 70.00th=[11863], 80.00th=[13435], 90.00th=[15795], 95.00th=[18220], 00:35:05.855 | 99.00th=[20317], 99.50th=[21627], 99.90th=[22938], 99.95th=[28443], 00:35:05.855 | 99.99th=[28443] 00:35:05.855 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:35:05.855 slat (nsec): min=1612, max=15084k, avg=90688.62, stdev=617778.47 00:35:05.855 clat (usec): min=567, max=49342, avg=12136.41, stdev=8414.55 00:35:05.855 lat (usec): min=772, max=50211, avg=12227.10, stdev=8462.85 00:35:05.855 clat percentiles (usec): 00:35:05.855 | 1.00th=[ 1614], 5.00th=[ 4080], 10.00th=[ 5735], 20.00th=[ 6194], 00:35:05.855 | 30.00th=[ 7242], 40.00th=[ 8586], 50.00th=[10683], 60.00th=[11338], 00:35:05.855 | 70.00th=[12649], 80.00th=[15664], 90.00th=[20579], 95.00th=[31851], 00:35:05.855 | 99.00th=[46400], 99.50th=[47973], 99.90th=[49546], 99.95th=[49546], 00:35:05.855 | 99.99th=[49546] 00:35:05.855 bw ( KiB/s): min=20480, max=24576, per=24.49%, avg=22528.00, stdev=2896.31, samples=2 00:35:05.855 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:35:05.855 lat (usec) : 750=0.02%, 1000=0.02% 00:35:05.855 
lat (msec) : 2=0.61%, 4=1.97%, 10=46.33%, 20=44.34%, 50=6.71% 00:35:05.855 cpu : usr=4.28%, sys=5.17%, ctx=406, majf=0, minf=2 00:35:05.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:35:05.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:05.855 issued rwts: total=5455,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:05.855 job2: (groupid=0, jobs=1): err= 0: pid=2238347: Wed Nov 20 17:17:57 2024 00:35:05.855 read: IOPS=6369, BW=24.9MiB/s (26.1MB/s)(25.0MiB/1003msec) 00:35:05.855 slat (nsec): min=943, max=20990k, avg=73894.87, stdev=641026.27 00:35:05.855 clat (usec): min=2055, max=44682, avg=10227.79, stdev=5773.85 00:35:05.855 lat (usec): min=2061, max=44689, avg=10301.69, stdev=5818.05 00:35:05.855 clat percentiles (usec): 00:35:05.855 | 1.00th=[ 3130], 5.00th=[ 5669], 10.00th=[ 6128], 20.00th=[ 6652], 00:35:05.855 | 30.00th=[ 7111], 40.00th=[ 7504], 50.00th=[ 8160], 60.00th=[ 9110], 00:35:05.855 | 70.00th=[11600], 80.00th=[13042], 90.00th=[15926], 95.00th=[20579], 00:35:05.855 | 99.00th=[38536], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:35:05.855 | 99.99th=[44827] 00:35:05.855 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:35:05.855 slat (nsec): min=1603, max=11587k, avg=66803.60, stdev=536109.43 00:35:05.855 clat (usec): min=436, max=36642, avg=9306.26, stdev=4479.70 00:35:05.855 lat (usec): min=472, max=36645, avg=9373.06, stdev=4504.09 00:35:05.855 clat percentiles (usec): 00:35:05.855 | 1.00th=[ 2835], 5.00th=[ 4146], 10.00th=[ 4621], 20.00th=[ 5866], 00:35:05.855 | 30.00th=[ 6783], 40.00th=[ 7308], 50.00th=[ 8356], 60.00th=[ 9241], 00:35:05.855 | 70.00th=[10421], 80.00th=[13173], 90.00th=[14353], 95.00th=[17695], 00:35:05.855 | 99.00th=[25297], 99.50th=[28443], 99.90th=[28705], 99.95th=[28705], 00:35:05.855 | 99.99th=[36439] 00:35:05.855 bw ( KiB/s): min=24576, max=28672, per=28.94%, avg=26624.00, stdev=2896.31, samples=2 00:35:05.855 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:35:05.855 lat (usec) : 500=0.01%, 750=0.09% 00:35:05.855 lat (msec) : 2=0.30%, 4=2.34%, 10=63.66%, 20=29.51%, 50=4.09% 00:35:05.855 cpu : usr=4.59%, sys=7.78%, ctx=359, majf=0, minf=1 00:35:05.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:35:05.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:05.855 issued rwts: total=6389,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:05.855 job3: (groupid=0, jobs=1): err= 0: pid=2238348: Wed Nov 20 17:17:57 2024 00:35:05.855 read: IOPS=5110, BW=20.0MiB/s (20.9MB/s)(20.1MiB/1007msec) 00:35:05.855 slat (nsec): min=944, max=15702k, avg=95198.80, stdev=799261.37 00:35:05.855 clat (usec): min=2562, max=37187, avg=12702.19, stdev=5943.02 00:35:05.855 lat (usec): min=2566, max=37212, avg=12797.39, stdev=5994.18 00:35:05.855 clat percentiles (usec): 00:35:05.855 | 1.00th=[ 4555], 5.00th=[ 6128], 10.00th=[ 6718], 20.00th=[ 7373], 00:35:05.855 | 30.00th=[ 7963], 40.00th=[ 9896], 50.00th=[11600], 60.00th=[13698], 00:35:05.855 | 70.00th=[14877], 80.00th=[16909], 90.00th=[20579], 95.00th=[24511], 00:35:05.855 | 99.00th=[32113], 99.50th=[33162], 99.90th=[33817], 99.95th=[33817], 
00:35:05.855 | 99.99th=[36963] 00:35:05.855 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:35:05.855 slat (nsec): min=1651, max=11070k, avg=76250.15, stdev=519997.06 00:35:05.855 clat (usec): min=704, max=57809, avg=11075.56, stdev=8000.77 00:35:05.855 lat (usec): min=737, max=58646, avg=11151.81, stdev=8048.33 00:35:05.855 clat percentiles (usec): 00:35:05.855 | 1.00th=[ 1254], 5.00th=[ 2474], 10.00th=[ 3818], 20.00th=[ 6456], 00:35:05.855 | 30.00th=[ 7504], 40.00th=[ 7963], 50.00th=[ 8225], 60.00th=[10683], 00:35:05.855 | 70.00th=[12649], 80.00th=[13698], 90.00th=[19530], 95.00th=[26608], 00:35:05.855 | 99.00th=[47449], 99.50th=[54789], 99.90th=[57934], 99.95th=[57934], 00:35:05.855 | 99.99th=[57934] 00:35:05.855 bw ( KiB/s): min=16552, max=27688, per=24.04%, avg=22120.00, stdev=7874.34, samples=2 00:35:05.855 iops : min= 4138, max= 6922, avg=5530.00, stdev=1968.59, samples=2 00:35:05.855 lat (usec) : 750=0.03%, 1000=0.10% 00:35:05.855 lat (msec) : 2=1.32%, 4=4.49%, 10=43.93%, 20=39.58%, 50=10.18% 00:35:05.855 lat (msec) : 100=0.37% 00:35:05.855 cpu : usr=4.57%, sys=4.27%, ctx=513, majf=0, minf=1 00:35:05.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:35:05.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:05.856 issued rwts: total=5146,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.856 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:05.856 00:35:05.856 Run status group 0 (all jobs): 00:35:05.856 READ: bw=85.5MiB/s (89.7MB/s), 19.8MiB/s-24.9MiB/s (20.8MB/s-26.1MB/s), io=86.4MiB (90.6MB), run=1003-1010msec 00:35:05.856 WRITE: bw=89.8MiB/s (94.2MB/s), 20.5MiB/s-25.9MiB/s (21.5MB/s-27.2MB/s), io=90.7MiB (95.1MB), run=1003-1010msec 00:35:05.856 00:35:05.856 Disk stats (read/write): 00:35:05.856 nvme0n1: ios=4149/4263, merge=0/0, ticks=45030/50471, in_queue=95501, util=99.60% 00:35:05.856 nvme0n2: ios=4122/4343, merge=0/0, ticks=40853/47641, in_queue=88494, util=82.63% 00:35:05.856 nvme0n3: ios=5120/5319, merge=0/0, ticks=48263/45956, in_queue=94219, util=86.64% 00:35:05.856 nvme0n4: ios=3584/4086, merge=0/0, ticks=48203/47935, in_queue=96138, util=88.90% 00:35:05.856 17:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:35:05.856 [global] 00:35:05.856 thread=1 00:35:05.856 invalidate=1 00:35:05.856 rw=randwrite 00:35:05.856 time_based=1 00:35:05.856 runtime=1 00:35:05.856 ioengine=libaio 00:35:05.856 direct=1 00:35:05.856 bs=4096 00:35:05.856 iodepth=128 00:35:05.856 norandommap=0 00:35:05.856 numjobs=1 00:35:05.856 00:35:05.856 verify_dump=1 00:35:05.856 verify_backlog=512 00:35:05.856 verify_state_save=0 00:35:05.856 do_verify=1 00:35:05.856 verify=crc32c-intel 00:35:05.856 [job0] 00:35:05.856 filename=/dev/nvme0n1 00:35:05.856 [job1] 00:35:05.856 filename=/dev/nvme0n2 00:35:05.856 [job2] 00:35:05.856 filename=/dev/nvme0n3 00:35:05.856 [job3] 00:35:05.856 filename=/dev/nvme0n4 00:35:05.856 Could not set queue depth (nvme0n1) 00:35:05.856 Could not set queue depth (nvme0n2) 00:35:05.856 Could not set queue depth (nvme0n3) 00:35:05.856 Could not set queue depth (nvme0n4) 00:35:06.183 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:06.183 job1: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:06.183 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:06.183 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:06.183 fio-3.35 00:35:06.183 Starting 4 threads 00:35:07.595 00:35:07.595 job0: (groupid=0, jobs=1): err= 0: pid=2238866: Wed Nov 20 17:17:59 2024 00:35:07.595 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:35:07.595 slat (nsec): min=947, max=12709k, avg=97534.60, stdev=699240.72 00:35:07.595 clat (usec): min=2821, max=95749, avg=9575.17, stdev=7180.12 00:35:07.595 lat (usec): min=2828, max=95757, avg=9672.70, stdev=7317.38 00:35:07.595 clat percentiles (usec): 00:35:07.595 | 1.00th=[ 4686], 5.00th=[ 6063], 10.00th=[ 6325], 20.00th=[ 6783], 00:35:07.595 | 30.00th=[ 6915], 40.00th=[ 7177], 50.00th=[ 7570], 60.00th=[ 7898], 00:35:07.595 | 70.00th=[ 8979], 80.00th=[11076], 90.00th=[13566], 95.00th=[18744], 00:35:07.595 | 99.00th=[41157], 99.50th=[57410], 99.90th=[95945], 99.95th=[95945], 00:35:07.595 | 99.99th=[95945] 00:35:07.595 write: IOPS=4880, BW=19.1MiB/s (20.0MB/s)(19.3MiB/1011msec); 0 zone resets 00:35:07.595 slat (nsec): min=1598, max=12646k, avg=104634.61, stdev=628138.08 00:35:07.595 clat (usec): min=1097, max=114932, avg=17065.73, stdev=20369.43 00:35:07.595 lat (usec): min=1109, max=114940, avg=17170.37, stdev=20487.08 00:35:07.595 clat percentiles (msec): 00:35:07.595 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 00:35:07.595 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 11], 60.00th=[ 13], 00:35:07.595 | 70.00th=[ 15], 80.00th=[ 18], 90.00th=[ 40], 95.00th=[ 64], 00:35:07.595 | 99.00th=[ 110], 99.50th=[ 111], 99.90th=[ 115], 99.95th=[ 115], 00:35:07.595 | 99.99th=[ 115] 00:35:07.595 bw ( KiB/s): min=13880, max=24576, per=20.44%, avg=19228.00, stdev=7563.21, samples=2 00:35:07.595 iops : min= 3470, max= 6144, avg=4807.00, stdev=1890.80, samples=2 00:35:07.595 lat (msec) : 2=0.09%, 4=1.95%, 10=60.21%, 20=25.55%, 50=8.12% 00:35:07.595 lat (msec) : 100=3.01%, 250=1.07% 00:35:07.595 cpu : usr=2.77%, sys=5.45%, ctx=395, majf=0, minf=1 00:35:07.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:35:07.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:07.595 issued rwts: total=4608,4934,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:07.595 job1: (groupid=0, jobs=1): err= 0: pid=2238867: Wed Nov 20 17:17:59 2024 00:35:07.595 read: IOPS=7811, BW=30.5MiB/s (32.0MB/s)(30.8MiB/1008msec) 00:35:07.595 slat (nsec): min=910, max=7803.1k, avg=62667.88, stdev=488329.24 00:35:07.595 clat (usec): min=3199, max=25796, avg=8284.38, stdev=2793.04 00:35:07.595 lat (usec): min=3204, max=25798, avg=8347.05, stdev=2824.30 00:35:07.595 clat percentiles (usec): 00:35:07.595 | 1.00th=[ 3949], 5.00th=[ 4621], 10.00th=[ 5604], 20.00th=[ 5997], 00:35:07.595 | 30.00th=[ 6652], 40.00th=[ 7308], 50.00th=[ 7898], 60.00th=[ 8356], 00:35:07.595 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[11600], 95.00th=[13435], 00:35:07.595 | 99.00th=[17957], 99.50th=[17957], 99.90th=[25297], 99.95th=[25822], 00:35:07.595 | 99.99th=[25822] 00:35:07.595 write: IOPS=8126, BW=31.7MiB/s (33.3MB/s)(32.0MiB/1008msec); 0 zone resets 00:35:07.595 slat (nsec): min=1555, max=8635.1k, 
avg=56397.39, stdev=377935.74 00:35:07.595 clat (usec): min=1599, max=25797, avg=7644.08, stdev=4226.98 00:35:07.595 lat (usec): min=1611, max=25800, avg=7700.48, stdev=4252.30 00:35:07.595 clat percentiles (usec): 00:35:07.595 | 1.00th=[ 2802], 5.00th=[ 3752], 10.00th=[ 4113], 20.00th=[ 4686], 00:35:07.595 | 30.00th=[ 5669], 40.00th=[ 6063], 50.00th=[ 6652], 60.00th=[ 7111], 00:35:07.595 | 70.00th=[ 7701], 80.00th=[ 8717], 90.00th=[12780], 95.00th=[19530], 00:35:07.595 | 99.00th=[23462], 99.50th=[23987], 99.90th=[24249], 99.95th=[24249], 00:35:07.595 | 99.99th=[25822] 00:35:07.595 bw ( KiB/s): min=28672, max=36864, per=34.83%, avg=32768.00, stdev=5792.62, samples=2 00:35:07.595 iops : min= 7168, max= 9216, avg=8192.00, stdev=1448.15, samples=2 00:35:07.595 lat (msec) : 2=0.13%, 4=4.29%, 10=79.58%, 20=13.64%, 50=2.35% 00:35:07.595 cpu : usr=5.46%, sys=6.55%, ctx=606, majf=0, minf=1 00:35:07.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:07.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:07.595 issued rwts: total=7874,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:07.595 job2: (groupid=0, jobs=1): err= 0: pid=2238868: Wed Nov 20 17:17:59 2024 00:35:07.595 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:35:07.595 slat (nsec): min=936, max=13393k, avg=70966.67, stdev=615735.86 00:35:07.595 clat (usec): min=1157, max=48659, avg=11052.53, stdev=6954.52 00:35:07.595 lat (usec): min=1167, max=48665, avg=11123.50, stdev=6987.34 00:35:07.595 clat percentiles (usec): 00:35:07.595 | 1.00th=[ 2147], 5.00th=[ 3523], 10.00th=[ 4178], 20.00th=[ 5407], 00:35:07.595 | 30.00th=[ 6849], 40.00th=[ 7832], 50.00th=[ 9110], 60.00th=[10683], 00:35:07.595 | 70.00th=[13698], 80.00th=[15664], 90.00th=[20055], 95.00th=[23987], 00:35:07.595 | 99.00th=[30278], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:35:07.595 | 99.99th=[48497] 00:35:07.595 write: IOPS=6024, BW=23.5MiB/s (24.7MB/s)(23.7MiB/1007msec); 0 zone resets 00:35:07.595 slat (nsec): min=1551, max=16167k, avg=68852.56, stdev=630009.48 00:35:07.595 clat (usec): min=224, max=76194, avg=10831.68, stdev=7722.02 00:35:07.595 lat (usec): min=457, max=76204, avg=10900.53, stdev=7754.87 00:35:07.595 clat percentiles (usec): 00:35:07.595 | 1.00th=[ 1221], 5.00th=[ 2737], 10.00th=[ 4752], 20.00th=[ 5669], 00:35:07.595 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 9372], 00:35:07.595 | 70.00th=[12256], 80.00th=[15795], 90.00th=[18482], 95.00th=[22676], 00:35:07.595 | 99.00th=[49021], 99.50th=[63701], 99.90th=[76022], 99.95th=[76022], 00:35:07.595 | 99.99th=[76022] 00:35:07.595 bw ( KiB/s): min=19232, max=28280, per=25.25%, avg=23756.00, stdev=6397.90, samples=2 00:35:07.595 iops : min= 4808, max= 7070, avg=5939.00, stdev=1599.48, samples=2 00:35:07.595 lat (usec) : 250=0.01%, 500=0.05%, 750=0.03%, 1000=0.18% 00:35:07.595 lat (msec) : 2=1.62%, 4=6.44%, 10=50.78%, 20=32.41%, 50=8.06% 00:35:07.595 lat (msec) : 100=0.42% 00:35:07.595 cpu : usr=3.88%, sys=7.16%, ctx=374, majf=0, minf=3 00:35:07.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:35:07.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:07.596 issued rwts: total=5632,6067,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:35:07.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:07.596 job3: (groupid=0, jobs=1): err= 0: pid=2238869: Wed Nov 20 17:17:59 2024 00:35:07.596 read: IOPS=4069, BW=15.9MiB/s (16.7MB/s)(16.1MiB/1012msec) 00:35:07.596 slat (nsec): min=991, max=14354k, avg=79615.57, stdev=642199.43 00:35:07.596 clat (usec): min=1363, max=59376, avg=10662.68, stdev=4788.00 00:35:07.596 lat (usec): min=1372, max=59383, avg=10742.29, stdev=4823.33 00:35:07.596 clat percentiles (usec): 00:35:07.596 | 1.00th=[ 2278], 5.00th=[ 6718], 10.00th=[ 6980], 20.00th=[ 7504], 00:35:07.596 | 30.00th=[ 7898], 40.00th=[ 8979], 50.00th=[ 9765], 60.00th=[10683], 00:35:07.596 | 70.00th=[11207], 80.00th=[12125], 90.00th=[16188], 95.00th=[20055], 00:35:07.596 | 99.00th=[26346], 99.50th=[26608], 99.90th=[55837], 99.95th=[55837], 00:35:07.596 | 99.99th=[59507] 00:35:07.596 write: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec); 0 zone resets 00:35:07.596 slat (nsec): min=1609, max=10924k, avg=128722.46, stdev=690189.59 00:35:07.596 clat (usec): min=527, max=83213, avg=18332.09, stdev=15877.40 00:35:07.596 lat (usec): min=818, max=83221, avg=18460.81, stdev=15981.37 00:35:07.596 clat percentiles (usec): 00:35:07.596 | 1.00th=[ 3949], 5.00th=[ 5669], 10.00th=[ 6128], 20.00th=[ 6783], 00:35:07.596 | 30.00th=[ 7570], 40.00th=[10814], 50.00th=[12649], 60.00th=[14222], 00:35:07.596 | 70.00th=[18220], 80.00th=[28181], 90.00th=[43779], 95.00th=[51643], 00:35:07.596 | 99.00th=[78119], 99.50th=[79168], 99.90th=[83362], 99.95th=[83362], 00:35:07.596 | 99.99th=[83362] 00:35:07.596 bw ( KiB/s): min=17552, max=18472, per=19.15%, avg=18012.00, stdev=650.54, samples=2 00:35:07.596 iops : min= 4388, max= 4618, avg=4503.00, stdev=162.63, samples=2 00:35:07.596 lat (usec) : 750=0.01%, 1000=0.02% 00:35:07.596 lat (msec) : 2=0.29%, 4=0.96%, 10=42.57%, 20=38.78%, 50=14.00% 00:35:07.596 lat (msec) : 100=3.36% 00:35:07.596 cpu : usr=2.57%, sys=5.93%, ctx=406, majf=0, minf=1 00:35:07.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:35:07.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:07.596 issued rwts: total=4118,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:07.596 00:35:07.596 Run status group 0 (all jobs): 00:35:07.596 READ: bw=85.8MiB/s (90.0MB/s), 15.9MiB/s-30.5MiB/s (16.7MB/s-32.0MB/s), io=86.8MiB (91.1MB), run=1007-1012msec 00:35:07.596 WRITE: bw=91.9MiB/s (96.3MB/s), 17.8MiB/s-31.7MiB/s (18.7MB/s-33.3MB/s), io=93.0MiB (97.5MB), run=1007-1012msec 00:35:07.596 00:35:07.596 Disk stats (read/write): 00:35:07.596 nvme0n1: ios=3634/4015, merge=0/0, ticks=30296/65708, in_queue=96004, util=91.88% 00:35:07.596 nvme0n2: ios=6185/6655, merge=0/0, ticks=50178/51697, in_queue=101875, util=96.33% 00:35:07.596 nvme0n3: ios=4270/5120, merge=0/0, ticks=35457/38563, in_queue=74020, util=88.40% 00:35:07.596 nvme0n4: ios=3710/4096, merge=0/0, ticks=37557/64403, in_queue=101960, util=89.43% 00:35:07.596 17:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:35:07.596 17:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2239194 00:35:07.596 17:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:35:07.596 17:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:35:07.596 [global] 00:35:07.596 thread=1 00:35:07.596 invalidate=1 00:35:07.596 rw=read 00:35:07.596 time_based=1 00:35:07.596 runtime=10 00:35:07.596 ioengine=libaio 00:35:07.596 direct=1 00:35:07.596 bs=4096 00:35:07.596 iodepth=1 00:35:07.596 norandommap=1 00:35:07.596 numjobs=1 00:35:07.596 00:35:07.596 [job0] 00:35:07.596 filename=/dev/nvme0n1 00:35:07.596 [job1] 00:35:07.596 filename=/dev/nvme0n2 00:35:07.596 [job2] 00:35:07.596 filename=/dev/nvme0n3 00:35:07.596 [job3] 00:35:07.596 filename=/dev/nvme0n4 00:35:07.596 Could not set queue depth (nvme0n1) 00:35:07.596 Could not set queue depth (nvme0n2) 00:35:07.596 Could not set queue depth (nvme0n3) 00:35:07.596 Could not set queue depth (nvme0n4) 00:35:07.858 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:07.858 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:07.858 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:07.858 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:07.858 fio-3.35 00:35:07.858 Starting 4 threads 00:35:10.406 17:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:35:10.406 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=14004224, buflen=4096 00:35:10.406 fio: pid=2239393, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:10.406 17:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:35:10.667 17:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:10.667 17:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:35:10.667 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=15110144, buflen=4096 00:35:10.667 fio: pid=2239392, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:10.929 17:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:10.929 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2400256, buflen=4096 00:35:10.929 fio: pid=2239389, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:10.929 17:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:35:11.191 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=5074944, buflen=4096 00:35:11.191 fio: pid=2239390, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:11.191 17:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:11.191 17:18:03 
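[Editor's note: the io_u errors appearing here are deliberate. target/fio.sh@58-66 starts a 10-second read job in the background and then deletes the backing bdevs out from under the exported namespaces; err=95 is EOPNOTSUPP, which is what the host sees once a namespace has lost its bdev. An untested sketch of the sequence, with the fio-wrapper flags copied from the xtrace and `rpc` as in the earlier note:]

    spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # 10s read job on nvme0n1..n4
    fio_pid=$!
    sleep 3                                                         # let the jobs ramp up (fio.sh@61)
    $rpc bdev_raid_delete concat0                                   # reads on nvme0n4 start failing
    $rpc bdev_raid_delete raid0                                     # then nvme0n3
    for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
        $rpc bdev_malloc_delete $malloc_bdev                        # Malloc0, Malloc1, ... Malloc6
    done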
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:35:11.191 00:35:11.191 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2239389: Wed Nov 20 17:18:03 2024 00:35:11.191 read: IOPS=197, BW=788KiB/s (807kB/s)(2344KiB/2976msec) 00:35:11.191 slat (usec): min=6, max=10761, avg=53.85, stdev=483.84 00:35:11.191 clat (usec): min=433, max=42160, avg=4980.40, stdev=11837.78 00:35:11.191 lat (usec): min=459, max=42185, avg=5031.52, stdev=11839.22 00:35:11.191 clat percentiles (usec): 00:35:11.191 | 1.00th=[ 635], 5.00th=[ 889], 10.00th=[ 996], 20.00th=[ 1090], 00:35:11.191 | 30.00th=[ 1139], 40.00th=[ 1188], 50.00th=[ 1221], 60.00th=[ 1237], 00:35:11.191 | 70.00th=[ 1270], 80.00th=[ 1319], 90.00th=[ 1565], 95.00th=[41681], 00:35:11.191 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:11.191 | 99.99th=[42206] 00:35:11.191 bw ( KiB/s): min= 112, max= 1912, per=6.50%, avg=737.60, stdev=737.62, samples=5 00:35:11.191 iops : min= 28, max= 478, avg=184.40, stdev=184.40, samples=5 00:35:11.191 lat (usec) : 500=0.17%, 750=2.21%, 1000=8.01% 00:35:11.191 lat (msec) : 2=80.07%, 50=9.37% 00:35:11.191 cpu : usr=0.10%, sys=0.71%, ctx=591, majf=0, minf=2 00:35:11.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:11.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.191 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.191 issued rwts: total=587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:11.191 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2239390: Wed Nov 20 17:18:03 2024 00:35:11.191 read: IOPS=393, BW=1572KiB/s (1610kB/s)(4956KiB/3153msec) 00:35:11.191 slat (usec): min=6, max=21696, avg=106.82, stdev=1194.20 00:35:11.191 clat (usec): min=471, max=42181, avg=2413.22, stdev=7400.83 00:35:11.191 lat (usec): min=496, max=42207, avg=2520.11, stdev=7481.25 00:35:11.191 clat percentiles (usec): 00:35:11.191 | 1.00th=[ 701], 5.00th=[ 816], 10.00th=[ 865], 20.00th=[ 947], 00:35:11.191 | 30.00th=[ 979], 40.00th=[ 1012], 50.00th=[ 1037], 60.00th=[ 1057], 00:35:11.191 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[ 1287], 00:35:11.191 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:11.191 | 99.99th=[42206] 00:35:11.191 bw ( KiB/s): min= 240, max= 3664, per=13.48%, avg=1528.00, stdev=1559.63, samples=6 00:35:11.191 iops : min= 60, max= 916, avg=382.00, stdev=389.91, samples=6 00:35:11.191 lat (usec) : 500=0.08%, 750=1.94%, 1000=32.66% 00:35:11.191 lat (msec) : 2=61.77%, 10=0.08%, 50=3.39% 00:35:11.191 cpu : usr=0.44%, sys=1.17%, ctx=1247, majf=0, minf=1 00:35:11.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:11.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.191 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.191 issued rwts: total=1240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:11.191 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2239392: Wed Nov 20 17:18:03 2024 00:35:11.191 read: IOPS=1322, BW=5287KiB/s 
(5414kB/s)(14.4MiB/2791msec) 00:35:11.191 slat (usec): min=6, max=15703, avg=28.02, stdev=258.23 00:35:11.191 clat (usec): min=170, max=42461, avg=717.07, stdev=1369.12 00:35:11.191 lat (usec): min=177, max=42487, avg=745.09, stdev=1393.70 00:35:11.191 clat percentiles (usec): 00:35:11.191 | 1.00th=[ 343], 5.00th=[ 474], 10.00th=[ 519], 20.00th=[ 562], 00:35:11.191 | 30.00th=[ 603], 40.00th=[ 635], 50.00th=[ 668], 60.00th=[ 709], 00:35:11.191 | 70.00th=[ 758], 80.00th=[ 791], 90.00th=[ 832], 95.00th=[ 865], 00:35:11.191 | 99.00th=[ 938], 99.50th=[ 979], 99.90th=[41681], 99.95th=[42206], 00:35:11.191 | 99.99th=[42206] 00:35:11.191 bw ( KiB/s): min= 5256, max= 6568, per=50.58%, avg=5732.80, stdev=497.24, samples=5 00:35:11.191 iops : min= 1314, max= 1642, avg=1433.20, stdev=124.31, samples=5 00:35:11.191 lat (usec) : 250=0.14%, 500=7.26%, 750=61.60%, 1000=30.62% 00:35:11.191 lat (msec) : 2=0.24%, 50=0.11% 00:35:11.191 cpu : usr=1.04%, sys=3.94%, ctx=3691, majf=0, minf=2 00:35:11.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:11.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.191 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.191 issued rwts: total=3690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:11.191 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2239393: Wed Nov 20 17:18:03 2024 00:35:11.191 read: IOPS=1314, BW=5258KiB/s (5384kB/s)(13.4MiB/2601msec) 00:35:11.191 slat (nsec): min=6626, max=75047, avg=23429.46, stdev=7669.46 00:35:11.191 clat (usec): min=179, max=1490, avg=724.05, stdev=240.30 00:35:11.191 lat (usec): min=187, max=1516, avg=747.48, stdev=242.00 00:35:11.191 clat percentiles (usec): 00:35:11.191 | 1.00th=[ 379], 5.00th=[ 445], 10.00th=[ 478], 20.00th=[ 523], 00:35:11.191 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 766], 00:35:11.191 | 70.00th=[ 938], 80.00th=[ 996], 90.00th=[ 1074], 95.00th=[ 1123], 00:35:11.191 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[ 1254], 99.95th=[ 1434], 00:35:11.191 | 99.99th=[ 1483] 00:35:11.191 bw ( KiB/s): min= 3984, max= 7016, per=46.50%, avg=5270.40, stdev=1547.75, samples=5 00:35:11.191 iops : min= 996, max= 1754, avg=1317.60, stdev=386.94, samples=5 00:35:11.191 lat (usec) : 250=0.20%, 500=14.06%, 750=45.44%, 1000=20.56% 00:35:11.191 lat (msec) : 2=19.71% 00:35:11.191 cpu : usr=1.08%, sys=3.81%, ctx=3421, majf=0, minf=2 00:35:11.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:11.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.191 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.191 issued rwts: total=3420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:11.191 00:35:11.191 Run status group 0 (all jobs): 00:35:11.192 READ: bw=11.1MiB/s (11.6MB/s), 788KiB/s-5287KiB/s (807kB/s-5414kB/s), io=34.9MiB (36.6MB), run=2601-3153msec 00:35:11.192 00:35:11.192 Disk stats (read/write): 00:35:11.192 nvme0n1: ios=557/0, merge=0/0, ticks=2791/0, in_queue=2791, util=94.29% 00:35:11.192 nvme0n2: ios=1215/0, merge=0/0, ticks=2946/0, in_queue=2946, util=93.18% 00:35:11.192 nvme0n3: ios=3675/0, merge=0/0, ticks=2429/0, in_queue=2429, util=95.99% 00:35:11.192 nvme0n4: ios=3420/0, merge=0/0, ticks=2402/0, in_queue=2402, util=96.31% 00:35:11.192 
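[Editor's note: all four read jobs finish with err=95 and run for only ~2.6-3.2 seconds of their 10-second budget, which is the expected outcome. The pass/fail logic in the xtrace that follows reduces to roughly this — a paraphrase of target/fio.sh@69-80 reconstructed from the trace, not the verbatim script:]

    fio_status=0
    wait $fio_pid || fio_status=$?     # trace: fio_status=4 after wait 2239194
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    if [ "$fio_status" -eq 0 ]; then
        echo "nvmf hotplug test: fio did not fail as expected"   # assumed failure branch
    else
        echo "nvmf hotplug test: fio failed as expected"
    fi

waitforserial_disconnect (common/autotest_common.sh@1223-1235 in the trace) is the mirror of waitforserial: it polls lsblk until the SPDKISFASTANDAWESOME serial disappears from the block device list.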
17:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:11.192 17:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:35:11.454 17:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:11.454 17:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:35:11.715 17:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:11.715 17:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:35:11.715 17:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:11.715 17:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:35:11.976 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:35:11.976 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2239194 00:35:11.976 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:35:11.976 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:11.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:11.976 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:11.976 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:35:11.976 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:11.976 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:11.976 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:11.976 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:11.976 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:35:11.976 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:35:11.976 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:35:11.976 nvmf hotplug test: fio failed as expected 00:35:11.976 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:12.237 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:35:12.237 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:35:12.237 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:35:12.237 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:35:12.238 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:35:12.238 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:12.238 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:35:12.238 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:12.238 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:35:12.238 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:12.238 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:12.238 rmmod nvme_tcp 00:35:12.238 rmmod nvme_fabrics 00:35:12.238 rmmod nvme_keyring 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2235756 ']' 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2235756 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2235756 ']' 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2235756 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2235756 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2235756' 00:35:12.500 killing process with pid 2235756 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2235756 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 
2235756 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:12.500 17:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:15.049 00:35:15.049 real 0m28.444s 00:35:15.049 user 2m23.755s 00:35:15.049 sys 0m12.646s 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:15.049 ************************************ 00:35:15.049 END TEST nvmf_fio_target 00:35:15.049 ************************************ 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:15.049 ************************************ 00:35:15.049 START TEST nvmf_bdevio 00:35:15.049 ************************************ 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:15.049 * Looking for test storage... 
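run_test wraps each suite only for timing and the START/END banners; the script itself can be invoked directly with the arguments shown in the trace (root is required for the NIC and netns setup it performs):

    # Direct invocation, as traced above, from the SPDK tree:
    test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode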
00:35:15.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:15.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.049 --rc genhtml_branch_coverage=1 00:35:15.049 --rc genhtml_function_coverage=1 00:35:15.049 --rc genhtml_legend=1 00:35:15.049 --rc geninfo_all_blocks=1 00:35:15.049 --rc geninfo_unexecuted_blocks=1 00:35:15.049 00:35:15.049 ' 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:15.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.049 --rc genhtml_branch_coverage=1 00:35:15.049 --rc genhtml_function_coverage=1 00:35:15.049 --rc genhtml_legend=1 00:35:15.049 --rc geninfo_all_blocks=1 00:35:15.049 --rc geninfo_unexecuted_blocks=1 00:35:15.049 00:35:15.049 ' 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:15.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.049 --rc genhtml_branch_coverage=1 00:35:15.049 --rc genhtml_function_coverage=1 00:35:15.049 --rc genhtml_legend=1 00:35:15.049 --rc geninfo_all_blocks=1 00:35:15.049 --rc geninfo_unexecuted_blocks=1 00:35:15.049 00:35:15.049 ' 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:15.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.049 --rc genhtml_branch_coverage=1 00:35:15.049 --rc genhtml_function_coverage=1 00:35:15.049 --rc genhtml_legend=1 00:35:15.049 --rc geninfo_all_blocks=1 00:35:15.049 --rc geninfo_unexecuted_blocks=1 00:35:15.049 00:35:15.049 ' 00:35:15.049 17:18:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:15.049 17:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:15.049 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:15.049 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:15.049 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:15.049 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:15.049 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:15.049 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:15.049 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:15.049 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:15.049 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:15.049 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:15.049 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:15.050 17:18:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:35:15.050 17:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:23.191 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:23.191 17:18:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:23.191 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:23.191 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:23.192 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:23.192 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:23.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:23.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:35:23.192 00:35:23.192 --- 10.0.0.2 ping statistics --- 00:35:23.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.192 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:23.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:23.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:35:23.192 00:35:23.192 --- 10.0.0.1 ping statistics --- 00:35:23.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.192 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:23.192 17:18:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2244991 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2244991 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2244991 ']' 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:23.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:23.192 17:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:23.192 [2024-11-20 17:18:14.430766] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:23.192 [2024-11-20 17:18:14.431744] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:35:23.192 [2024-11-20 17:18:14.431781] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:23.192 [2024-11-20 17:18:14.526648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:23.192 [2024-11-20 17:18:14.563045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:23.192 [2024-11-20 17:18:14.563076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:23.192 [2024-11-20 17:18:14.563084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:23.192 [2024-11-20 17:18:14.563091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:23.192 [2024-11-20 17:18:14.563097] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:23.192 [2024-11-20 17:18:14.564825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:23.192 [2024-11-20 17:18:14.564974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:23.192 [2024-11-20 17:18:14.565121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:23.192 [2024-11-20 17:18:14.565121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:23.192 [2024-11-20 17:18:14.621552] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
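In the nvmf_tgt command line above, -m 0x78 is the reactor core mask: 0x78 = 0b1111000, i.e. cores 3-6, matching the four "Reactor started" lines, and --interrupt-mode is what switches the reactors and poll-group threads to the event-driven ("intr") scheduling reported by thread.c. Condensed from the trace:

    # Target launch as traced: netns-scoped, all tracepoint groups (-e 0xFFFF),
    # interrupt mode enabled, reactors pinned to cores 3-6 (mask 0x78):
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78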
00:35:23.192 [2024-11-20 17:18:14.622869] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:23.193 [2024-11-20 17:18:14.623015] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:23.193 [2024-11-20 17:18:14.623765] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:23.193 [2024-11-20 17:18:14.623820] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:23.193 [2024-11-20 17:18:15.265863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:23.193 Malloc0 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.193 17:18:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:23.193 [2024-11-20 17:18:15.358138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:23.193 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.454 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:23.454 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:23.454 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:23.454 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:23.454 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:23.454 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:23.454 { 00:35:23.454 "params": { 00:35:23.454 "name": "Nvme$subsystem", 00:35:23.454 "trtype": "$TEST_TRANSPORT", 00:35:23.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:23.454 "adrfam": "ipv4", 00:35:23.454 "trsvcid": "$NVMF_PORT", 00:35:23.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:23.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:23.454 "hdgst": ${hdgst:-false}, 00:35:23.454 "ddgst": ${ddgst:-false} 00:35:23.454 }, 00:35:23.454 "method": "bdev_nvme_attach_controller" 00:35:23.454 } 00:35:23.454 EOF 00:35:23.454 )") 00:35:23.454 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:23.454 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:35:23.454 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:23.454 17:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:23.454 "params": { 00:35:23.454 "name": "Nvme1", 00:35:23.454 "trtype": "tcp", 00:35:23.454 "traddr": "10.0.0.2", 00:35:23.454 "adrfam": "ipv4", 00:35:23.454 "trsvcid": "4420", 00:35:23.454 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:23.454 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:23.454 "hdgst": false, 00:35:23.454 "ddgst": false 00:35:23.454 }, 00:35:23.454 "method": "bdev_nvme_attach_controller" 00:35:23.454 }' 00:35:23.454 [2024-11-20 17:18:15.412609] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
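The printf above emits only the bdev_nvme_attach_controller entry; gen_nvmf_target_json wraps it into the full --json document that bdevio reads from /dev/fd/62. Roughly the following, where the outer "subsystems"/"bdev" wrapper is assumed from the usual SPDK config layout rather than shown in the trace:

    # Approximate document handed to bdevio --json (wrapper structure assumed):
    cat <<'JSON'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false }
    } ] } ] }
    JSON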
00:35:23.454 [2024-11-20 17:18:15.412659] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2245128 ] 00:35:23.454 [2024-11-20 17:18:15.501125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:23.454 [2024-11-20 17:18:15.550202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:23.454 [2024-11-20 17:18:15.550309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:23.454 [2024-11-20 17:18:15.550310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:23.715 I/O targets: 00:35:23.715 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:23.715 00:35:23.715 00:35:23.715 CUnit - A unit testing framework for C - Version 2.1-3 00:35:23.715 http://cunit.sourceforge.net/ 00:35:23.715 00:35:23.715 00:35:23.715 Suite: bdevio tests on: Nvme1n1 00:35:23.976 Test: blockdev write read block ...passed 00:35:23.976 Test: blockdev write zeroes read block ...passed 00:35:23.976 Test: blockdev write zeroes read no split ...passed 00:35:23.976 Test: blockdev write zeroes read split ...passed 00:35:23.976 Test: blockdev write zeroes read split partial ...passed 00:35:23.976 Test: blockdev reset ...[2024-11-20 17:18:16.046173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:23.976 [2024-11-20 17:18:16.046273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd17970 (9): Bad file descriptor 00:35:23.976 [2024-11-20 17:18:16.142085] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:35:23.976 passed 00:35:23.976 Test: blockdev write read 8 blocks ...passed 00:35:23.976 Test: blockdev write read size > 128k ...passed 00:35:23.976 Test: blockdev write read invalid size ...passed 00:35:24.237 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:24.237 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:24.237 Test: blockdev write read max offset ...passed 00:35:24.237 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:24.237 Test: blockdev writev readv 8 blocks ...passed 00:35:24.237 Test: blockdev writev readv 30 x 1block ...passed 00:35:24.237 Test: blockdev writev readv block ...passed 00:35:24.237 Test: blockdev writev readv size > 128k ...passed 00:35:24.237 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:24.237 Test: blockdev comparev and writev ...[2024-11-20 17:18:16.369677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:24.237 [2024-11-20 17:18:16.369726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.237 [2024-11-20 17:18:16.369743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:24.237 [2024-11-20 17:18:16.369753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:24.237 [2024-11-20 17:18:16.370396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:24.237 [2024-11-20 17:18:16.370409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:24.237 [2024-11-20 17:18:16.370423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:24.237 [2024-11-20 17:18:16.370431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:24.237 [2024-11-20 17:18:16.371085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:24.237 [2024-11-20 17:18:16.371097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:24.237 [2024-11-20 17:18:16.371120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:24.237 [2024-11-20 17:18:16.371128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:24.237 [2024-11-20 17:18:16.371753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:24.237 [2024-11-20 17:18:16.371764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:24.237 [2024-11-20 17:18:16.371778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:24.237 [2024-11-20 17:18:16.371786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:24.497 passed 00:35:24.497 Test: blockdev nvme passthru rw ...passed 00:35:24.497 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:18:16.456090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:24.497 [2024-11-20 17:18:16.456110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:24.497 [2024-11-20 17:18:16.456482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:24.497 [2024-11-20 17:18:16.456494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:24.498 [2024-11-20 17:18:16.456881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:24.498 [2024-11-20 17:18:16.456893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:24.498 [2024-11-20 17:18:16.457251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:24.498 [2024-11-20 17:18:16.457264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:24.498 passed 00:35:24.498 Test: blockdev nvme admin passthru ...passed 00:35:24.498 Test: blockdev copy ...passed 00:35:24.498 00:35:24.498 Run Summary: Type Total Ran Passed Failed Inactive 00:35:24.498 suites 1 1 n/a 0 0 00:35:24.498 tests 23 23 23 0 0 00:35:24.498 asserts 152 152 152 0 n/a 00:35:24.498 00:35:24.498 Elapsed time = 1.293 seconds 00:35:24.498 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:24.498 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.498 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:24.498 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.498 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:24.498 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:24.498 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:24.498 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:24.498 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:24.498 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:24.498 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:24.498 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:24.498 rmmod nvme_tcp 00:35:24.758 rmmod nvme_fabrics 00:35:24.758 rmmod nvme_keyring 00:35:24.758 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
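The paired notices in the comparev-and-writev test above, COMPARE FAILURE (02/85) on the COMPARE half followed by ABORTED - FAILED FUSED (00/09) on the companion WRITE, are the expected miscompare path for NVMe fused compare-and-write: when the first (COMPARE) command of the fused pair fails, the controller aborts the second (WRITE), which is what the test asserts; the suite still reports 23/23 passed. Teardown then proceeds as traced; condensed, with paths abbreviated:

    # Cleanup sequence condensed from the surrounding trace:
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp       # unloads nvme_tcp/nvme_fabrics/nvme_keyring (rmmod lines above)
    modprobe -v -r nvme-fabrics
    kill 2244991 && wait 2244991  # killprocess on the nvmf_tgt pid, traced below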
00:35:24.758 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:24.758 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:24.758 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2244991 ']' 00:35:24.758 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2244991 00:35:24.758 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2244991 ']' 00:35:24.758 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2244991 00:35:24.758 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:35:24.758 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:24.758 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2244991 00:35:24.758 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:35:24.758 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:35:24.758 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2244991' 00:35:24.758 killing process with pid 2244991 00:35:24.758 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2244991 00:35:24.758 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2244991 00:35:25.018 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:25.018 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:25.018 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:25.018 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:35:25.018 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:25.018 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:25.018 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:25.018 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:25.018 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:25.018 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:25.018 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:25.018 17:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.931 17:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:26.931 00:35:26.931 real 0m12.240s 00:35:26.931 user 
0m10.695s 00:35:26.931 sys 0m6.310s 00:35:26.931 17:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:26.931 17:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:26.931 ************************************ 00:35:26.931 END TEST nvmf_bdevio 00:35:26.931 ************************************ 00:35:26.931 17:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:26.931 00:35:26.931 real 5m1.385s 00:35:26.931 user 10m29.481s 00:35:26.931 sys 2m6.597s 00:35:26.931 17:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:26.931 17:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:26.931 ************************************ 00:35:26.931 END TEST nvmf_target_core_interrupt_mode 00:35:26.931 ************************************ 00:35:27.207 17:18:19 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:27.207 17:18:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:27.207 17:18:19 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:27.207 17:18:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:27.207 ************************************ 00:35:27.207 START TEST nvmf_interrupt 00:35:27.207 ************************************ 00:35:27.207 17:18:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:27.207 * Looking for test storage... 
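Before the trace below works through coverage and environment probing, note what interrupt.sh ultimately launches; a minimal sketch, with the flags copied verbatim from the nvmfappstart line later in this log (the harness additionally wraps the binary in "ip netns exec cvl_0_0_ns_spdk"):

    # Boot the NVMe-oF target with both reactors in interrupt mode on cores 0-1.
    # -i 0 sets the shared-memory id, -e 0xFFFF enables all tracepoint groups.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # In interrupt mode the reactors sleep between events instead of polling,
    # which is what the reactor_is_idle/reactor_is_busy probes below verify.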
00:35:27.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:27.207 17:18:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:27.207 17:18:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:35:27.207 17:18:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:27.207 17:18:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:27.207 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:27.207 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:27.207 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:27.207 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:27.207 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:27.207 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:27.207 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:27.207 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:27.208 17:18:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:27.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:27.208 --rc genhtml_branch_coverage=1 00:35:27.208 --rc genhtml_function_coverage=1 00:35:27.208 --rc genhtml_legend=1 00:35:27.208 --rc geninfo_all_blocks=1 00:35:27.208 --rc geninfo_unexecuted_blocks=1 00:35:27.208 00:35:27.208 ' 00:35:27.209 17:18:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:27.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:27.209 --rc genhtml_branch_coverage=1 00:35:27.209 --rc genhtml_function_coverage=1 00:35:27.209 --rc genhtml_legend=1 00:35:27.209 --rc geninfo_all_blocks=1 00:35:27.209 --rc geninfo_unexecuted_blocks=1 00:35:27.209 00:35:27.209 ' 00:35:27.209 17:18:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:27.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:27.209 --rc genhtml_branch_coverage=1 00:35:27.209 --rc genhtml_function_coverage=1 00:35:27.209 --rc genhtml_legend=1 00:35:27.209 --rc geninfo_all_blocks=1 00:35:27.209 --rc geninfo_unexecuted_blocks=1 00:35:27.209 00:35:27.209 ' 00:35:27.209 17:18:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:27.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:27.209 --rc genhtml_branch_coverage=1 00:35:27.209 --rc genhtml_function_coverage=1 00:35:27.209 --rc genhtml_legend=1 00:35:27.209 --rc geninfo_all_blocks=1 00:35:27.209 --rc geninfo_unexecuted_blocks=1 00:35:27.209 00:35:27.209 ' 00:35:27.209 17:18:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:27.209 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:27.209 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:27.209 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:27.209 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:27.209 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:27.209 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:27.209 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:27.209 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:27.209 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:27.209 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:27.209 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:27.210 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:27.210 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:27.210 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:27.210 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:27.210 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:27.210 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:27.210 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:27.210 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:27.210 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:27.210 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:27.210 17:18:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:27.210 17:18:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:27.213 17:18:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:27.213 17:18:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:27.213 17:18:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:35:27.214 17:18:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:27.214 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:27.214 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:27.214 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:27.214 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:27.214 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:27.214 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:27.214 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:27.214 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:27.214 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:27.214 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:27.214 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:27.214 17:18:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:27.215 17:18:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:27.215 17:18:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:27.215 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:27.215 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:27.215 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:27.215 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:27.215 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:27.215 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:27.215 17:18:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:27.215 17:18:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:27.480 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:27.480 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:27.480 17:18:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:27.480 17:18:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:35.619 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:35.619 17:18:26 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:35.619 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:35.619 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:35.619 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:35.620 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:35.620 17:18:26 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:35.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:35.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:35:35.620 00:35:35.620 --- 10.0.0.2 ping statistics --- 00:35:35.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:35.620 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:35.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:35.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:35:35.620 00:35:35.620 --- 10.0.0.1 ping statistics --- 00:35:35.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:35.620 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2249641 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2249641 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2249641 ']' 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:35.620 17:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:35.620 [2024-11-20 17:18:26.693505] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:35.620 [2024-11-20 17:18:26.694489] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:35:35.620 [2024-11-20 17:18:26.694526] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:35.620 [2024-11-20 17:18:26.789556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:35.620 [2024-11-20 17:18:26.825185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:35.620 [2024-11-20 17:18:26.825214] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:35.620 [2024-11-20 17:18:26.825222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:35.620 [2024-11-20 17:18:26.825229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:35.620 [2024-11-20 17:18:26.825234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:35.620 [2024-11-20 17:18:26.826432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.620 [2024-11-20 17:18:26.826520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.620 [2024-11-20 17:18:26.882490] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:35.620 [2024-11-20 17:18:26.883035] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:35.620 [2024-11-20 17:18:26.883379] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:35.620 5000+0 records in 00:35:35.620 5000+0 records out 00:35:35.620 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0192303 s, 532 MB/s 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:35.620 AIO0 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:35.620 [2024-11-20 17:18:27.599377] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.620 17:18:27 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:35.620 [2024-11-20 17:18:27.643952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:35.620 17:18:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.621 17:18:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:35.621 17:18:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2249641 0 00:35:35.621 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2249641 0 idle 00:35:35.621 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2249641 00:35:35.621 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:35.621 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:35.621 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:35.621 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:35.621 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:35.621 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:35.621 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:35.621 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:35.621 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:35.621 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2249641 -w 256 00:35:35.621 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:35.882 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2249641 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.27 reactor_0' 00:35:35.882 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2249641 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.27 reactor_0 00:35:35.882 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:35.882 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:35.882 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:35.882 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:35:35.882 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:35.882 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:35.882 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:35.882 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:35.882 17:18:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:35.882 17:18:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2249641 1 00:35:35.883 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2249641 1 idle 00:35:35.883 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2249641 00:35:35.883 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:35.883 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:35.883 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:35.883 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:35.883 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:35.883 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:35.883 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:35.883 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:35.883 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:35.883 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2249641 -w 256 00:35:35.883 17:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2249684 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2249684 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2249833 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2249641 0 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2249641 0 busy 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2249641 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2249641 -w 256 00:35:35.883 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:36.144 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2249641 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.28 reactor_0' 00:35:36.144 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2249641 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.28 reactor_0 00:35:36.144 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:36.144 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:36.144 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:35:36.144 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:35:36.144 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:36.144 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:36.144 17:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:35:37.086 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:35:37.086 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:37.086 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2249641 -w 256 00:35:37.086 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2249641 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.48 reactor_0' 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2249641 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.48 reactor_0 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2249641 1 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2249641 1 busy 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2249641 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2249641 -w 256 00:35:37.347 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:37.609 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2249684 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.29 reactor_1' 00:35:37.609 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2249684 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.29 reactor_1 00:35:37.609 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:37.609 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:37.609 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:37.609 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:37.609 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:37.609 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:37.609 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:37.609 17:18:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:37.609 17:18:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2249833 00:35:47.611 Initializing NVMe Controllers 00:35:47.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:47.611 Controller IO queue size 256, less than required. 00:35:47.611 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:47.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:47.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:47.611 Initialization complete. Launching workers. 
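The workload whose results are tabulated next is the spdk_nvme_perf invocation captured above, reproduced here for readability (all values verbatim from the trace): a 256-deep, 4 KiB random workload with a 30% read mix for 10 seconds, pinned to cores 2-3 (-c 0xC) so the target reactors on cores 0-1 stay measurable:

    build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    # The busy/idle checks bracketing the run boil down to this probe from
    # interrupt/common.sh: read the %CPU column ($9) of one reactor thread.
    top -bHn 1 -p "$nvmfpid" -w 256 | grep reactor_0 | sed -e 's/^\s*//g' | awk '{print $9}'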
00:35:47.611 ========================================================
00:35:47.611                                                              Latency(us)
00:35:47.611 Device Information                                          :      IOPS     MiB/s   Average       min       max
00:35:47.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  19213.20     75.05  13329.04   3858.37  31865.10
00:35:47.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  19822.20     77.43  12916.17   8132.04  28113.84
00:35:47.611 ========================================================
00:35:47.611 Total                                                       :  39035.40    152.48  13119.38   3858.37  31865.10
00:35:47.611
00:35:47.611 17:18:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:47.611 17:18:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2249641 0 00:35:47.611 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2249641 0 idle 00:35:47.611 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2249641 00:35:47.611 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:47.611 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:47.611 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:47.611 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:47.611 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:47.611 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:47.611 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:47.611 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:47.611 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:47.611 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2249641 -w 256 00:35:47.611 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2249641 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.26 reactor_0' 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2249641 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.26 reactor_0 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2249641 1 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2249641 1 idle 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2249641 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt --
interrupt/common.sh@11 -- # local idx=1 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2249641 -w 256 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2249684 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1' 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2249684 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:47.612 17:18:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:47.612 17:18:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:47.612 17:18:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:35:47.612 17:18:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:47.612 17:18:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:47.612 17:18:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2249641 0 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2249641 0 idle 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2249641 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2249641 -w 256 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2249641 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.62 reactor_0' 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2249641 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.62 reactor_0 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2249641 1 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2249641 1 idle 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2249641 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
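The kernel-initiator leg the trace is re-verifying here condenses to the sketch below; the host NQN/ID come from nvme gen-hostnqn earlier in this log, the serial is the SPDKISFASTANDAWESOME default from nvmf/common.sh, and the until loop stands in for the harness's waitforserial helper:

    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # Wait for the namespace to surface as a block device, then detach
    # (the disconnect is what the trace performs right after these idle checks).
    until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1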
00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2249641 -w 256 00:35:49.525 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:49.785 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2249684 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:35:49.785 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2249684 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:35:49.785 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:49.785 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:49.785 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:49.785 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:49.785 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:49.785 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:49.785 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:49.785 17:18:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:49.785 17:18:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:49.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:49.785 17:18:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:49.785 17:18:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:35:49.785 17:18:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:49.785 17:18:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:50.045 17:18:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:50.045 17:18:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:50.045 17:18:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:35:50.045 17:18:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:50.045 17:18:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:50.045 17:18:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:50.045 17:18:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:50.045 17:18:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:50.045 17:18:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:50.045 17:18:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:50.045 17:18:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:50.045 rmmod nvme_tcp 00:35:50.045 rmmod nvme_fabrics 00:35:50.045 rmmod nvme_keyring 00:35:50.045 17:18:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:50.045 17:18:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:50.046 17:18:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:50.046 17:18:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
2249641 ']' 00:35:50.046 17:18:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2249641 00:35:50.046 17:18:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2249641 ']' 00:35:50.046 17:18:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2249641 00:35:50.046 17:18:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:35:50.046 17:18:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:50.046 17:18:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2249641 00:35:50.046 17:18:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:50.046 17:18:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:50.046 17:18:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2249641' 00:35:50.046 killing process with pid 2249641 00:35:50.046 17:18:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2249641 00:35:50.046 17:18:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2249641 00:35:50.306 17:18:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:50.306 17:18:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:50.306 17:18:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:50.306 17:18:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:50.306 17:18:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:50.306 17:18:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:50.306 17:18:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:35:50.306 17:18:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:50.306 17:18:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:50.307 17:18:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:50.307 17:18:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:50.307 17:18:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.218 17:18:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:52.218 00:35:52.218 real 0m25.173s 00:35:52.218 user 0m40.319s 00:35:52.218 sys 0m9.560s 00:35:52.218 17:18:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:52.218 17:18:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:52.218 ************************************ 00:35:52.218 END TEST nvmf_interrupt 00:35:52.218 ************************************ 00:35:52.218 00:35:52.218 real 30m14.882s 00:35:52.218 user 62m2.025s 00:35:52.218 sys 10m21.507s 00:35:52.218 17:18:44 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:52.218 17:18:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.218 ************************************ 00:35:52.218 END TEST nvmf_tcp 00:35:52.218 ************************************ 00:35:52.479 17:18:44 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:35:52.479 17:18:44 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:52.479 17:18:44 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:52.479 17:18:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:52.479 17:18:44 -- common/autotest_common.sh@10 -- # set +x 00:35:52.479 ************************************ 00:35:52.479 START TEST spdkcli_nvmf_tcp 00:35:52.479 ************************************ 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:52.479 * Looking for test storage... 00:35:52.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:52.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.479 --rc genhtml_branch_coverage=1 00:35:52.479 --rc genhtml_function_coverage=1 00:35:52.479 --rc genhtml_legend=1 00:35:52.479 --rc geninfo_all_blocks=1 00:35:52.479 --rc geninfo_unexecuted_blocks=1 00:35:52.479 00:35:52.479 ' 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:52.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.479 --rc genhtml_branch_coverage=1 00:35:52.479 --rc genhtml_function_coverage=1 00:35:52.479 --rc genhtml_legend=1 00:35:52.479 --rc geninfo_all_blocks=1 00:35:52.479 --rc geninfo_unexecuted_blocks=1 00:35:52.479 00:35:52.479 ' 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:52.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.479 --rc genhtml_branch_coverage=1 00:35:52.479 --rc genhtml_function_coverage=1 00:35:52.479 --rc genhtml_legend=1 00:35:52.479 --rc geninfo_all_blocks=1 00:35:52.479 --rc geninfo_unexecuted_blocks=1 00:35:52.479 00:35:52.479 ' 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:52.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.479 --rc genhtml_branch_coverage=1 00:35:52.479 --rc genhtml_function_coverage=1 00:35:52.479 --rc genhtml_legend=1 00:35:52.479 --rc geninfo_all_blocks=1 00:35:52.479 --rc geninfo_unexecuted_blocks=1 00:35:52.479 00:35:52.479 ' 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:52.479 17:18:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:52.741 
17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:52.741 17:18:44 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:52.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2253209 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2253209 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2253209 ']' 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:52.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:52.741 17:18:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.741 [2024-11-20 17:18:44.761996] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
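The startup sequence above launches nvmf_tgt on two cores (-m 0x3 -p 0) and then blocks in waitforlisten until the application answers on its UNIX-domain RPC socket. A rough approximation of that wait, assuming the default /var/tmp/spdk.sock socket and the in-tree scripts/rpc.py with its -s (socket) and -t (timeout) options; this is an illustration of the polling idea, not the waitforlisten helper itself:

# Illustrative startup + readiness poll (paths relative to the SPDK repo root):
./build/bin/nvmf_tgt -m 0x3 -p 0 &
tgt_pid=$!
for _ in $(seq 1 100); do
    # the RPC socket answering rpc_get_methods means the app finished init
    ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
    # bail out early if the target died during initialization
    kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.1
done

Once the poll succeeds, the test proceeds to drive the spdkcli_job.py command batch shown below against the live target.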
00:35:52.741 [2024-11-20 17:18:44.762072] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253209 ] 00:35:52.741 [2024-11-20 17:18:44.853470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:52.741 [2024-11-20 17:18:44.907480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:52.741 [2024-11-20 17:18:44.907609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:53.683 17:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:53.683 17:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:35:53.683 17:18:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:53.683 17:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:53.683 17:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:53.683 17:18:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:53.683 17:18:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:53.683 17:18:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:53.683 17:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:53.683 17:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:53.683 17:18:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:53.683 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:53.683 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:53.683 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:53.683 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:53.683 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:53.683 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:53.683 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:53.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:53.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:53.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:53.683 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:53.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:53.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:53.683 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:53.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:53.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:35:53.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:53.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:53.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:53.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:53.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:53.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:53.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:53.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:53.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:53.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:53.683 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:53.683 ' 00:35:56.226 [2024-11-20 17:18:48.370868] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:57.624 [2024-11-20 17:18:49.735048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:00.345 [2024-11-20 17:18:52.258239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:02.888 [2024-11-20 17:18:54.484465] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:04.274 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:04.274 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:04.274 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:04.274 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:04.274 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:04.274 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:04.274 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:04.274 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:04.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:04.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:04.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:04.274 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:04.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:04.274 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:04.274 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:04.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:04.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:04.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:04.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:04.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:04.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:04.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:04.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:04.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:04.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:04.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:04.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:04.274 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:04.274 17:18:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:04.274 17:18:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:04.274 17:18:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:04.274 17:18:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:04.274 17:18:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:04.274 17:18:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:04.274 17:18:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:04.274 17:18:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:04.535 17:18:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:04.796 17:18:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:04.796 17:18:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:04.796 17:18:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:04.796 17:18:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:04.796 
17:18:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:04.796 17:18:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:04.796 17:18:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:04.796 17:18:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:04.796 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:04.796 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:04.796 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:04.796 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:04.796 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:04.796 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:04.796 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:04.796 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:04.796 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:04.796 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:04.796 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:04.796 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:04.796 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:04.796 ' 00:36:11.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:11.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:11.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:11.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:11.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:11.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:11.379 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:11.379 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:11.379 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:11.379 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:11.379 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:11.379 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:11.379 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:11.379 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:11.379 
17:19:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2253209 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2253209 ']' 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2253209 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2253209 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2253209' 00:36:11.379 killing process with pid 2253209 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2253209 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2253209 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2253209 ']' 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2253209 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2253209 ']' 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2253209 00:36:11.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2253209) - No such process 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2253209 is not found' 00:36:11.379 Process with pid 2253209 is not found 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:11.379 00:36:11.379 real 0m18.232s 00:36:11.379 user 0m40.505s 00:36:11.379 sys 0m0.900s 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:11.379 17:19:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:11.379 ************************************ 00:36:11.379 END TEST spdkcli_nvmf_tcp 00:36:11.379 ************************************ 00:36:11.379 17:19:02 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:11.379 17:19:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:11.379 17:19:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:11.379 17:19:02 -- common/autotest_common.sh@10 -- # set +x 00:36:11.379 ************************************ 00:36:11.379 START TEST nvmf_identify_passthru 00:36:11.379 ************************************ 00:36:11.379 17:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:11.379 * Looking for test 
storage... 00:36:11.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:11.379 17:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:11.379 17:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:36:11.379 17:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:11.379 17:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:11.379 17:19:02 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:36:11.379 17:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:11.379 17:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:11.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.380 --rc genhtml_branch_coverage=1 00:36:11.380 --rc genhtml_function_coverage=1 00:36:11.380 --rc genhtml_legend=1 00:36:11.380 --rc geninfo_all_blocks=1 00:36:11.380 --rc geninfo_unexecuted_blocks=1 00:36:11.380 00:36:11.380 ' 00:36:11.380 17:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:11.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.380 --rc genhtml_branch_coverage=1 00:36:11.380 --rc genhtml_function_coverage=1 00:36:11.380 --rc genhtml_legend=1 00:36:11.380 --rc geninfo_all_blocks=1 00:36:11.380 --rc geninfo_unexecuted_blocks=1 00:36:11.380 00:36:11.380 ' 00:36:11.380 17:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:11.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.380 --rc genhtml_branch_coverage=1 00:36:11.380 --rc genhtml_function_coverage=1 00:36:11.380 --rc genhtml_legend=1 00:36:11.380 --rc geninfo_all_blocks=1 00:36:11.380 --rc geninfo_unexecuted_blocks=1 00:36:11.380 00:36:11.380 ' 00:36:11.380 17:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:11.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.380 --rc genhtml_branch_coverage=1 00:36:11.380 --rc genhtml_function_coverage=1 00:36:11.380 --rc genhtml_legend=1 00:36:11.380 --rc geninfo_all_blocks=1 00:36:11.380 --rc geninfo_unexecuted_blocks=1 00:36:11.380 00:36:11.380 ' 00:36:11.380 17:19:02 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:11.380 17:19:02 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:11.380 17:19:02 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:11.380 17:19:02 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:11.380 17:19:02 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:11.380 17:19:02 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.380 17:19:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.380 17:19:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.380 17:19:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:11.380 17:19:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:11.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:11.380 17:19:02 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:11.380 17:19:02 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:11.380 17:19:02 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:11.380 17:19:02 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:11.380 17:19:02 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:11.380 17:19:02 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.380 17:19:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.380 17:19:02 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.380 17:19:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:11.380 17:19:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.380 17:19:02 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:11.380 17:19:02 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:11.380 17:19:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:11.380 17:19:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.380 17:19:03 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:11.380 17:19:03 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:11.380 17:19:03 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:36:11.380 17:19:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:36:19.522 17:19:10 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:19.522 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:19.522 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:19.522 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:19.522 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:19.522 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:19.523 17:19:10 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:19.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:19.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:36:19.523 00:36:19.523 --- 10.0.0.2 ping statistics --- 00:36:19.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.523 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:19.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
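The two pings here close out the nvmf_tcp_init plumbing above: the physical E810 port moved into the cvl_0_0_ns_spdk namespace carries the target address 10.0.0.2, its sibling port stays in the root namespace as the initiator at 10.0.0.1, and reachability is confirmed in both directions. Condensed to just the checks (namespace name and addresses exactly as configured in the trace above):

# initiator side (root namespace) -> target namespace
ping -c 1 10.0.0.2
# target namespace -> initiator side
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Only after both succeed does the helper wrap NVMF_APP in the namespace prefix and return 0, so the target started later in this test listens inside cvl_0_0_ns_spdk.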
00:36:19.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:36:19.523 00:36:19.523 --- 10.0.0.1 ping statistics --- 00:36:19.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.523 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:19.523 17:19:10 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:19.523 17:19:10 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:19.523 17:19:10 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:19.523 17:19:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:19.523 17:19:10 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:19.523 17:19:10 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:36:19.523 17:19:10 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:36:19.523 17:19:10 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:36:19.523 17:19:10 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:36:19.523 17:19:10 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:36:19.523 17:19:10 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:36:19.523 17:19:10 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:19.523 17:19:10 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:19.523 17:19:10 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:36:19.523 17:19:10 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:36:19.523 17:19:10 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:36:19.523 17:19:10 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:36:19.523 17:19:10 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:36:19.523 17:19:10 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:36:19.523 17:19:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:19.523 17:19:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:19.523 17:19:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:19.523 17:19:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:36:19.523 17:19:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:19.523 17:19:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:19.523 17:19:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:19.523 17:19:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:36:19.523 17:19:11 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:19.523 17:19:11 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:19.523 17:19:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:19.523 17:19:11 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:19.523 17:19:11 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:19.523 17:19:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:19.784 17:19:11 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2260535 00:36:19.784 17:19:11 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:19.784 17:19:11 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:19.784 17:19:11 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2260535 00:36:19.784 17:19:11 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2260535 ']' 00:36:19.784 17:19:11 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:19.784 17:19:11 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:19.784 17:19:11 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:19.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:19.784 17:19:11 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:19.784 17:19:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:19.784 [2024-11-20 17:19:11.757988] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:36:19.784 [2024-11-20 17:19:11.758060] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:19.784 [2024-11-20 17:19:11.857792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:19.784 [2024-11-20 17:19:11.912454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:19.784 [2024-11-20 17:19:11.912510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
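
The target above is started inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, which holds initialization until the test has enabled identify passthru over the RPC socket. A minimal sketch of the equivalent manual sequence, assuming $SPDK_DIR points at the build tree and rpc.py talks to the default /var/tmp/spdk.sock:

    # launch the NVMe-oF target in the namespace; -i shm id, -e tracepoint
    # group mask, -m core mask; --wait-for-rpc pauses before framework init
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

    # identify passthru must be configured before the framework initializes
    "$SPDK_DIR/scripts/rpc.py" nvmf_set_config --passthru-identify-ctrlr

    # resume startup, then create the TCP transport
    # (-o disables the C2H success optimization, -u sets in-capsule data size)
    "$SPDK_DIR/scripts/rpc.py" framework_start_init
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192

The INFO: Requests / INFO: response pairs below are what rpc_cmd prints for these same calls when invoked with -v.
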
00:36:19.784 [2024-11-20 17:19:11.912519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:19.784 [2024-11-20 17:19:11.912526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:19.784 [2024-11-20 17:19:11.912533] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:19.784 [2024-11-20 17:19:11.914565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:19.784 [2024-11-20 17:19:11.914724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:19.784 [2024-11-20 17:19:11.914889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:19.784 [2024-11-20 17:19:11.914889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:20.728 17:19:12 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:20.728 17:19:12 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:36:20.728 17:19:12 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:20.728 17:19:12 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.728 17:19:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:20.728 INFO: Log level set to 20 00:36:20.728 INFO: Requests: 00:36:20.728 { 00:36:20.728 "jsonrpc": "2.0", 00:36:20.728 "method": "nvmf_set_config", 00:36:20.728 "id": 1, 00:36:20.728 "params": { 00:36:20.728 "admin_cmd_passthru": { 00:36:20.728 "identify_ctrlr": true 00:36:20.728 } 00:36:20.728 } 00:36:20.728 } 00:36:20.728 00:36:20.728 INFO: response: 00:36:20.729 { 00:36:20.729 "jsonrpc": "2.0", 00:36:20.729 "id": 1, 00:36:20.729 "result": true 00:36:20.729 } 00:36:20.729 00:36:20.729 17:19:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.729 17:19:12 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:20.729 17:19:12 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.729 17:19:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:20.729 INFO: Setting log level to 20 00:36:20.729 INFO: Setting log level to 20 00:36:20.729 INFO: Log level set to 20 00:36:20.729 INFO: Log level set to 20 00:36:20.729 INFO: Requests: 00:36:20.729 { 00:36:20.729 "jsonrpc": "2.0", 00:36:20.729 "method": "framework_start_init", 00:36:20.729 "id": 1 00:36:20.729 } 00:36:20.729 00:36:20.729 INFO: Requests: 00:36:20.729 { 00:36:20.729 "jsonrpc": "2.0", 00:36:20.729 "method": "framework_start_init", 00:36:20.729 "id": 1 00:36:20.729 } 00:36:20.729 00:36:20.729 [2024-11-20 17:19:12.682696] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:20.729 INFO: response: 00:36:20.729 { 00:36:20.729 "jsonrpc": "2.0", 00:36:20.729 "id": 1, 00:36:20.729 "result": true 00:36:20.729 } 00:36:20.729 00:36:20.729 INFO: response: 00:36:20.729 { 00:36:20.729 "jsonrpc": "2.0", 00:36:20.729 "id": 1, 00:36:20.729 "result": true 00:36:20.729 } 00:36:20.729 00:36:20.729 17:19:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.729 17:19:12 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:20.729 17:19:12 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.729 17:19:12 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:20.729 INFO: Setting log level to 40 00:36:20.729 INFO: Setting log level to 40 00:36:20.729 INFO: Setting log level to 40 00:36:20.729 [2024-11-20 17:19:12.696264] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:20.729 17:19:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.729 17:19:12 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:20.729 17:19:12 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:20.729 17:19:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:20.729 17:19:12 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:20.729 17:19:12 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.729 17:19:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:20.990 Nvme0n1 00:36:20.990 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.990 17:19:13 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:20.990 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.990 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:20.990 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.990 17:19:13 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:20.990 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.990 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:20.990 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.990 17:19:13 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:20.990 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.990 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:20.990 [2024-11-20 17:19:13.097103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:20.990 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.990 17:19:13 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:20.990 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.990 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:20.990 [ 00:36:20.990 { 00:36:20.990 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:20.990 "subtype": "Discovery", 00:36:20.990 "listen_addresses": [], 00:36:20.990 "allow_any_host": true, 00:36:20.990 "hosts": [] 00:36:20.990 }, 00:36:20.990 { 00:36:20.990 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:20.990 "subtype": "NVMe", 00:36:20.990 "listen_addresses": [ 00:36:20.990 { 00:36:20.990 "trtype": "TCP", 00:36:20.990 "adrfam": "IPv4", 00:36:20.990 "traddr": "10.0.0.2", 00:36:20.990 "trsvcid": "4420" 00:36:20.990 } 00:36:20.990 ], 00:36:20.990 "allow_any_host": true, 00:36:20.990 "hosts": [], 00:36:20.990 "serial_number": 
"SPDK00000000000001", 00:36:20.990 "model_number": "SPDK bdev Controller", 00:36:20.990 "max_namespaces": 1, 00:36:20.990 "min_cntlid": 1, 00:36:20.990 "max_cntlid": 65519, 00:36:20.990 "namespaces": [ 00:36:20.990 { 00:36:20.990 "nsid": 1, 00:36:20.990 "bdev_name": "Nvme0n1", 00:36:20.990 "name": "Nvme0n1", 00:36:20.990 "nguid": "36344730526054870025384500000044", 00:36:20.990 "uuid": "36344730-5260-5487-0025-384500000044" 00:36:20.990 } 00:36:20.990 ] 00:36:20.990 } 00:36:20.990 ] 00:36:20.990 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.990 17:19:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:20.990 17:19:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:20.990 17:19:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:21.252 17:19:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:36:21.252 17:19:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:21.252 17:19:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:21.252 17:19:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:21.514 17:19:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:36:21.514 17:19:13 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:36:21.514 17:19:13 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:36:21.514 17:19:13 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:21.514 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.514 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:21.514 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.514 17:19:13 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:21.514 17:19:13 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:21.514 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:21.514 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:21.514 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:21.514 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:21.514 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:21.514 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:21.514 rmmod nvme_tcp 00:36:21.514 rmmod nvme_fabrics 00:36:21.514 rmmod nvme_keyring 00:36:21.514 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:21.514 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:21.514 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:21.514 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
2260535 ']' 00:36:21.514 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2260535 00:36:21.514 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2260535 ']' 00:36:21.514 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2260535 00:36:21.514 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:36:21.514 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:21.514 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2260535 00:36:21.514 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:21.514 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:21.514 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2260535' 00:36:21.514 killing process with pid 2260535 00:36:21.514 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2260535 00:36:21.514 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2260535 00:36:21.776 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:21.776 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:21.776 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:21.776 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:21.776 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:21.776 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:21.776 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:21.776 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:21.776 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:21.776 17:19:13 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:21.776 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:21.776 17:19:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.324 17:19:16 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:24.324 00:36:24.324 real 0m13.270s 00:36:24.324 user 0m10.318s 00:36:24.324 sys 0m6.837s 00:36:24.324 17:19:16 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:24.324 17:19:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:24.324 ************************************ 00:36:24.324 END TEST nvmf_identify_passthru 00:36:24.324 ************************************ 00:36:24.324 17:19:16 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:24.324 17:19:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:24.324 17:19:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:24.324 17:19:16 -- common/autotest_common.sh@10 -- # set +x 00:36:24.324 ************************************ 00:36:24.324 START TEST nvmf_dif 00:36:24.324 ************************************ 00:36:24.324 17:19:16 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:24.324 * Looking for test storage... 
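
nvmftestfini above unwinds the fixture: the initiator-side kernel modules are removed, the iptables ACCEPT rule is dropped by filtering on its SPDK_NVMF comment tag, and the target's namespace is torn down. A sketch of the equivalent cleanup, assuming _remove_spdk_ns reduces to deleting the namespace created earlier (that detail is not shown in this log):

    # drop only the rules tagged with the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # assumption: remove the test namespace, returning cvl_0_0 to the root ns
    ip netns del cvl_0_0_ns_spdk

    # flush the address left on the initiator-side port
    ip -4 addr flush cvl_0_1
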
00:36:24.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:24.324 17:19:16 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:24.324 17:19:16 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:36:24.324 17:19:16 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:24.324 17:19:16 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:24.324 17:19:16 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:24.324 17:19:16 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:24.324 17:19:16 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:24.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.324 --rc genhtml_branch_coverage=1 00:36:24.324 --rc genhtml_function_coverage=1 00:36:24.324 --rc genhtml_legend=1 00:36:24.324 --rc geninfo_all_blocks=1 00:36:24.324 --rc geninfo_unexecuted_blocks=1 00:36:24.324 00:36:24.324 ' 00:36:24.324 17:19:16 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:24.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.324 --rc genhtml_branch_coverage=1 00:36:24.324 --rc genhtml_function_coverage=1 00:36:24.324 --rc genhtml_legend=1 00:36:24.324 --rc geninfo_all_blocks=1 00:36:24.324 --rc geninfo_unexecuted_blocks=1 00:36:24.324 00:36:24.324 ' 00:36:24.324 17:19:16 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:36:24.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.324 --rc genhtml_branch_coverage=1 00:36:24.324 --rc genhtml_function_coverage=1 00:36:24.324 --rc genhtml_legend=1 00:36:24.324 --rc geninfo_all_blocks=1 00:36:24.324 --rc geninfo_unexecuted_blocks=1 00:36:24.324 00:36:24.324 ' 00:36:24.324 17:19:16 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:24.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.324 --rc genhtml_branch_coverage=1 00:36:24.324 --rc genhtml_function_coverage=1 00:36:24.324 --rc genhtml_legend=1 00:36:24.324 --rc geninfo_all_blocks=1 00:36:24.325 --rc geninfo_unexecuted_blocks=1 00:36:24.325 00:36:24.325 ' 00:36:24.325 17:19:16 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:24.325 17:19:16 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:24.325 17:19:16 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.325 17:19:16 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.325 17:19:16 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.325 17:19:16 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.325 17:19:16 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.325 17:19:16 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.325 17:19:16 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:24.325 17:19:16 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:24.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:24.325 17:19:16 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:24.325 17:19:16 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:24.325 17:19:16 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:24.325 17:19:16 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:24.325 17:19:16 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.325 17:19:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:24.325 17:19:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:24.325 17:19:16 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:36:24.325 17:19:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:32.472 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:32.472 
17:19:23 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:32.472 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:32.472 17:19:23 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:32.473 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:32.473 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:32.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:32.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:36:32.473 00:36:32.473 --- 10.0.0.2 ping statistics --- 00:36:32.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:32.473 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:32.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:32.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:36:32.473 00:36:32.473 --- 10.0.0.1 ping statistics --- 00:36:32.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:32.473 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:32.473 17:19:23 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:35.024 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:35.024 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:35.024 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:35.024 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:35.024 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:35.024 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:35.285 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:35.285 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:35.285 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:35.285 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:35.285 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:35.285 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:35.285 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:35.285 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:35.285 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:35.285 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:35.285 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:35.548 17:19:27 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:35.548 17:19:27 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:35.548 17:19:27 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:35.548 17:19:27 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:35.548 17:19:27 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:35.548 17:19:27 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:35.548 17:19:27 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:35.548 17:19:27 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:35.548 17:19:27 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:35.548 17:19:27 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:35.548 17:19:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:35.548 17:19:27 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2266546 00:36:35.548 17:19:27 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2266546 00:36:35.548 17:19:27 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:35.548 17:19:27 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2266546 ']' 00:36:35.548 17:19:27 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:35.548 17:19:27 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:35.548 17:19:27 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:36:35.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:35.548 17:19:27 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:35.548 17:19:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:35.548 [2024-11-20 17:19:27.718354] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:36:35.548 [2024-11-20 17:19:27.718414] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:35.809 [2024-11-20 17:19:27.817042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:35.809 [2024-11-20 17:19:27.867997] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:35.809 [2024-11-20 17:19:27.868047] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:35.809 [2024-11-20 17:19:27.868056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:35.809 [2024-11-20 17:19:27.868063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:35.810 [2024-11-20 17:19:27.868069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:35.810 [2024-11-20 17:19:27.868811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:36.382 17:19:28 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:36.382 17:19:28 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:36:36.382 17:19:28 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:36.382 17:19:28 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:36.382 17:19:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:36.643 17:19:28 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:36.643 17:19:28 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:36.643 17:19:28 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:36.643 17:19:28 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.643 17:19:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:36.643 [2024-11-20 17:19:28.579906] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:36.643 17:19:28 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.643 17:19:28 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:36.643 17:19:28 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:36.643 17:19:28 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:36.643 17:19:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:36.643 ************************************ 00:36:36.643 START TEST fio_dif_1_default 00:36:36.643 ************************************ 00:36:36.643 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:36:36.643 17:19:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:36.643 17:19:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:36.643 17:19:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:36.643 17:19:28 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:36:36.643 17:19:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:36.643 17:19:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:36.643 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.643 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:36.643 bdev_null0 00:36:36.643 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.643 17:19:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:36.643 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.643 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:36.643 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.643 17:19:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:36.643 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.643 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:36.643 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:36.644 [2024-11-20 17:19:28.668356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:36.644 { 00:36:36.644 "params": { 00:36:36.644 "name": "Nvme$subsystem", 00:36:36.644 "trtype": "$TEST_TRANSPORT", 00:36:36.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:36.644 "adrfam": "ipv4", 00:36:36.644 "trsvcid": "$NVMF_PORT", 00:36:36.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:36.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:36.644 "hdgst": ${hdgst:-false}, 00:36:36.644 
"ddgst": ${ddgst:-false} 00:36:36.644 }, 00:36:36.644 "method": "bdev_nvme_attach_controller" 00:36:36.644 } 00:36:36.644 EOF 00:36:36.644 )") 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:36.644 "params": { 00:36:36.644 "name": "Nvme0", 00:36:36.644 "trtype": "tcp", 00:36:36.644 "traddr": "10.0.0.2", 00:36:36.644 "adrfam": "ipv4", 00:36:36.644 "trsvcid": "4420", 00:36:36.644 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:36.644 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:36.644 "hdgst": false, 00:36:36.644 "ddgst": false 00:36:36.644 }, 00:36:36.644 "method": "bdev_nvme_attach_controller" 00:36:36.644 }' 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:36.644 17:19:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:36.904 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:36.904 fio-3.35 00:36:36.904 Starting 1 thread 00:36:49.142 00:36:49.142 filename0: (groupid=0, jobs=1): err= 0: pid=2267073: Wed Nov 20 17:19:39 2024 00:36:49.142 read: IOPS=98, BW=394KiB/s (403kB/s)(3952KiB/10041msec) 00:36:49.142 slat (nsec): min=5466, max=90028, avg=6384.59, stdev=3161.74 00:36:49.142 clat (usec): min=855, max=42436, avg=40631.93, stdev=4417.55 00:36:49.142 lat (usec): min=861, max=42479, avg=40638.32, stdev=4416.76 00:36:49.142 clat percentiles (usec): 00:36:49.142 | 1.00th=[ 988], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:49.142 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:49.142 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:36:49.142 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:49.142 | 99.99th=[42206] 00:36:49.142 bw ( KiB/s): min= 352, max= 448, per=99.85%, avg=393.60, stdev=21.02, samples=20 00:36:49.142 iops : min= 88, max= 112, avg=98.40, stdev= 5.26, samples=20 00:36:49.142 lat (usec) : 1000=1.11% 00:36:49.142 lat (msec) : 2=0.10%, 50=98.79% 00:36:49.142 cpu : usr=93.34%, sys=6.39%, ctx=20, majf=0, minf=256 00:36:49.142 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:49.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.142 issued rwts: total=988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:49.142 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:49.142 
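
The job above drives the target through SPDK's fio bdev plugin rather than the kernel initiator: fio_bdev preloads build/fio/spdk_bdev and hands fio a generated bdev config (the bdev_nvme_attach_controller parameters printed above) plus the job file over /dev/fd. The namespace it reads is the DIF-enabled null bdev created earlier behind a transport created with --dif-insert-or-strip. A minimal sketch of the same setup with explicit files, where nvme0.json and job.fio are illustrative names and nvme0.json wraps the printed params in the standard {"subsystems":[{"subsystem":"bdev","config":[...]}]} envelope:

    # target side: DIF-capable TCP transport and a 64 MB null bdev with
    # 512-byte blocks plus 16 bytes of metadata carrying DIF type 1
    rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

    # initiator side: run fio through the spdk_bdev ioengine
    LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" fio \
        --ioengine=spdk_bdev --spdk_json_conf nvme0.json job.fio
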
00:36:49.142 Run status group 0 (all jobs): 00:36:49.142 READ: bw=394KiB/s (403kB/s), 394KiB/s-394KiB/s (403kB/s-403kB/s), io=3952KiB (4047kB), run=10041-10041msec 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.142 00:36:49.142 real 0m11.217s 00:36:49.142 user 0m19.356s 00:36:49.142 sys 0m1.054s 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:49.142 ************************************ 00:36:49.142 END TEST fio_dif_1_default 00:36:49.142 ************************************ 00:36:49.142 17:19:39 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:49.142 17:19:39 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:49.142 17:19:39 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:49.142 17:19:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:49.142 ************************************ 00:36:49.142 START TEST fio_dif_1_multi_subsystems 00:36:49.142 ************************************ 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:49.142 bdev_null0 00:36:49.142 17:19:39 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:49.142 [2024-11-20 17:19:39.965423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:49.142 bdev_null1 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.142 17:19:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:49.142 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.142 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:49.142 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.142 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:49.142 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.142 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:49.142 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:49.142 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:49.142 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:49.142 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:49.142 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:49.142 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:49.142 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:49.142 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:49.142 { 00:36:49.142 "params": { 00:36:49.142 "name": "Nvme$subsystem", 00:36:49.142 "trtype": "$TEST_TRANSPORT", 00:36:49.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:49.142 "adrfam": "ipv4", 00:36:49.142 "trsvcid": "$NVMF_PORT", 00:36:49.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:49.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:49.142 "hdgst": ${hdgst:-false}, 00:36:49.142 "ddgst": ${ddgst:-false} 00:36:49.142 }, 00:36:49.143 "method": "bdev_nvme_attach_controller" 00:36:49.143 } 00:36:49.143 EOF 00:36:49.143 )") 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:49.143 
17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:49.143 { 00:36:49.143 "params": { 00:36:49.143 "name": "Nvme$subsystem", 00:36:49.143 "trtype": "$TEST_TRANSPORT", 00:36:49.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:49.143 "adrfam": "ipv4", 00:36:49.143 "trsvcid": "$NVMF_PORT", 00:36:49.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:49.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:49.143 "hdgst": ${hdgst:-false}, 00:36:49.143 "ddgst": ${ddgst:-false} 00:36:49.143 }, 00:36:49.143 "method": "bdev_nvme_attach_controller" 00:36:49.143 } 00:36:49.143 EOF 00:36:49.143 )") 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:49.143 "params": { 00:36:49.143 "name": "Nvme0", 00:36:49.143 "trtype": "tcp", 00:36:49.143 "traddr": "10.0.0.2", 00:36:49.143 "adrfam": "ipv4", 00:36:49.143 "trsvcid": "4420", 00:36:49.143 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:49.143 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:49.143 "hdgst": false, 00:36:49.143 "ddgst": false 00:36:49.143 }, 00:36:49.143 "method": "bdev_nvme_attach_controller" 00:36:49.143 },{ 00:36:49.143 "params": { 00:36:49.143 "name": "Nvme1", 00:36:49.143 "trtype": "tcp", 00:36:49.143 "traddr": "10.0.0.2", 00:36:49.143 "adrfam": "ipv4", 00:36:49.143 "trsvcid": "4420", 00:36:49.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:49.143 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:49.143 "hdgst": false, 00:36:49.143 "ddgst": false 00:36:49.143 }, 00:36:49.143 "method": "bdev_nvme_attach_controller" 00:36:49.143 }' 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
-- # asan_lib= 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:49.143 17:19:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:49.143 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:49.143 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:49.143 fio-3.35 00:36:49.143 Starting 2 threads 00:36:59.142 00:36:59.142 filename0: (groupid=0, jobs=1): err= 0: pid=2269420: Wed Nov 20 17:19:51 2024 00:36:59.142 read: IOPS=191, BW=765KiB/s (783kB/s)(7648KiB/10001msec) 00:36:59.142 slat (nsec): min=5464, max=51326, avg=6304.69, stdev=2029.35 00:36:59.142 clat (usec): min=533, max=41858, avg=20903.79, stdev=20174.30 00:36:59.142 lat (usec): min=541, max=41883, avg=20910.09, stdev=20174.25 00:36:59.142 clat percentiles (usec): 00:36:59.142 | 1.00th=[ 578], 5.00th=[ 775], 10.00th=[ 799], 20.00th=[ 816], 00:36:59.142 | 30.00th=[ 824], 40.00th=[ 848], 50.00th=[ 1057], 60.00th=[41157], 00:36:59.142 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:59.142 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:36:59.142 | 99.99th=[41681] 00:36:59.142 bw ( KiB/s): min= 736, max= 768, per=49.80%, avg=764.63, stdev=10.09, samples=19 00:36:59.142 iops : min= 184, max= 192, avg=191.16, stdev= 2.52, samples=19 00:36:59.142 lat (usec) : 750=2.93%, 1000=46.91% 00:36:59.142 lat (msec) : 2=0.16%, 4=0.21%, 50=49.79% 00:36:59.142 cpu : usr=95.45%, sys=4.34%, ctx=13, majf=0, minf=206 00:36:59.142 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:59.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.143 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.143 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:59.143 filename1: (groupid=0, jobs=1): err= 0: pid=2269421: Wed Nov 20 17:19:51 2024 00:36:59.143 read: IOPS=192, BW=769KiB/s (788kB/s)(7696KiB/10002msec) 00:36:59.143 slat (nsec): min=5463, max=35390, avg=6768.60, stdev=2293.79 00:36:59.143 clat (usec): min=498, max=42075, avg=20773.74, stdev=20301.02 00:36:59.143 lat (usec): min=506, max=42096, avg=20780.51, stdev=20300.85 00:36:59.143 clat percentiles (usec): 00:36:59.143 | 1.00th=[ 519], 5.00th=[ 652], 10.00th=[ 668], 20.00th=[ 685], 00:36:59.143 | 30.00th=[ 693], 40.00th=[ 709], 50.00th=[ 824], 60.00th=[41157], 00:36:59.143 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:59.143 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:36:59.143 | 99.99th=[42206] 00:36:59.143 bw ( KiB/s): min= 704, max= 897, per=50.26%, avg=771.42, stdev=33.76, samples=19 00:36:59.143 iops : min= 176, max= 224, avg=192.84, stdev= 8.39, samples=19 00:36:59.143 lat (usec) : 500=0.05%, 750=48.65%, 1000=1.61% 00:36:59.143 lat (msec) : 2=0.21%, 50=49.48% 00:36:59.143 cpu : usr=95.57%, sys=4.22%, ctx=13, majf=0, minf=67 00:36:59.143 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:59.143 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.143 issued rwts: total=1924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.143 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:59.143 00:36:59.143 Run status group 0 (all jobs): 00:36:59.143 READ: bw=1534KiB/s (1571kB/s), 765KiB/s-769KiB/s (783kB/s-788kB/s), io=15.0MiB (15.7MB), run=10001-10002msec 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.143 00:36:59.143 real 0m11.388s 00:36:59.143 user 0m32.837s 00:36:59.143 sys 0m1.236s 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:59.143 17:19:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:59.143 ************************************ 00:36:59.143 END TEST fio_dif_1_multi_subsystems 00:36:59.143 ************************************ 00:36:59.404 
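Condensed for reference, the fio_dif_1_multi_subsystems setup that just finished is the same four RPCs repeated per subsystem id, with both subsystems sharing the one TCP listener address; a sketch assuming the stock scripts/rpc.py client behind rpc_cmd:

    # Two DIF type 1 null bdevs, one per NVMe-oF subsystem, one listener.
    for i in 0 1; do
        scripts/rpc.py bdev_null_create bdev_null$i 64 512 \
            --md-size 16 --dif-type 1
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            --serial-number 53313233-$i --allow-any-host
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i \
            bdev_null$i
        scripts/rpc.py nvmf_subsystem_add_listener \
            nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done

fio then attaches one controller per subsystem (Nvme0 and Nvme1, per the printed JSON) and drives one file against each, which is why the run above reports two filenames at roughly equal bandwidth.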
17:19:51 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:59.404 17:19:51 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:59.404 17:19:51 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:59.404 17:19:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:59.404 ************************************ 00:36:59.404 START TEST fio_dif_rand_params 00:36:59.404 ************************************ 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:59.404 bdev_null0 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:59.404 [2024-11-20 17:19:51.438818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:59.404 { 00:36:59.404 "params": { 00:36:59.404 "name": "Nvme$subsystem", 00:36:59.404 "trtype": "$TEST_TRANSPORT", 00:36:59.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:59.404 "adrfam": "ipv4", 00:36:59.404 "trsvcid": "$NVMF_PORT", 00:36:59.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:59.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:59.404 "hdgst": ${hdgst:-false}, 00:36:59.404 "ddgst": ${ddgst:-false} 00:36:59.404 }, 00:36:59.404 "method": "bdev_nvme_attach_controller" 00:36:59.404 } 00:36:59.404 EOF 00:36:59.404 )") 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:59.404 17:19:51 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:59.404 "params": { 00:36:59.404 "name": "Nvme0", 00:36:59.404 "trtype": "tcp", 00:36:59.404 "traddr": "10.0.0.2", 00:36:59.404 "adrfam": "ipv4", 00:36:59.404 "trsvcid": "4420", 00:36:59.404 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:59.404 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:59.404 "hdgst": false, 00:36:59.404 "ddgst": false 00:36:59.404 }, 00:36:59.404 "method": "bdev_nvme_attach_controller" 00:36:59.404 }' 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:59.404 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:59.405 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:59.405 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:59.405 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:59.405 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:59.405 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:59.405 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:59.405 17:19:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:59.984 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:59.984 ... 
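The fio banner just above reflects the knobs set at the top of fio_dif_rand_params (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5). The job file itself is generated by gen_fio_conf and handed to fio on /dev/fd/61, so it never appears in the log; the reconstruction below is approximate, and the bdev name Nvme0n1 is assumed from the attached Nvme0 controller:

    ; Approximate job file implied by the banner and the ~5s runtimes
    ; below; the real one comes from gen_fio_conf in target/dif.sh.
    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=128k
    iodepth=3
    runtime=5
    time_based=1

    [filename0]
    filename=Nvme0n1
    numjobs=3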
00:36:59.984 fio-3.35 00:36:59.984 Starting 3 threads 00:37:05.264 00:37:05.264 filename0: (groupid=0, jobs=1): err= 0: pid=2271759: Wed Nov 20 17:19:57 2024 00:37:05.264 read: IOPS=307, BW=38.5MiB/s (40.3MB/s)(194MiB/5047msec) 00:37:05.264 slat (nsec): min=5575, max=33492, avg=8176.51, stdev=1574.69 00:37:05.264 clat (usec): min=5053, max=49269, avg=9710.67, stdev=3445.83 00:37:05.264 lat (usec): min=5061, max=49275, avg=9718.84, stdev=3445.72 00:37:05.264 clat percentiles (usec): 00:37:05.264 | 1.00th=[ 7046], 5.00th=[ 7570], 10.00th=[ 7898], 20.00th=[ 8356], 00:37:05.264 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9765], 00:37:05.264 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11207], 00:37:05.264 | 99.00th=[12911], 99.50th=[47449], 99.90th=[49021], 99.95th=[49021], 00:37:05.264 | 99.99th=[49021] 00:37:05.264 bw ( KiB/s): min=32768, max=42752, per=33.58%, avg=39705.60, stdev=2862.04, samples=10 00:37:05.264 iops : min= 256, max= 334, avg=310.20, stdev=22.36, samples=10 00:37:05.264 lat (msec) : 10=65.23%, 20=34.06%, 50=0.71% 00:37:05.264 cpu : usr=94.93%, sys=4.82%, ctx=6, majf=0, minf=75 00:37:05.264 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:05.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.264 issued rwts: total=1553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:05.264 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:05.264 filename0: (groupid=0, jobs=1): err= 0: pid=2271760: Wed Nov 20 17:19:57 2024 00:37:05.264 read: IOPS=285, BW=35.6MiB/s (37.4MB/s)(180MiB/5045msec) 00:37:05.264 slat (nsec): min=5701, max=31285, avg=7932.78, stdev=1637.22 00:37:05.264 clat (usec): min=5760, max=90053, avg=10485.53, stdev=5866.98 00:37:05.264 lat (usec): min=5769, max=90063, avg=10493.46, stdev=5867.11 00:37:05.264 clat percentiles (usec): 00:37:05.264 | 1.00th=[ 6718], 5.00th=[ 7701], 10.00th=[ 8160], 20.00th=[ 8717], 00:37:05.264 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10159], 00:37:05.264 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11207], 95.00th=[11600], 00:37:05.264 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51119], 99.95th=[89654], 00:37:05.264 | 99.99th=[89654] 00:37:05.264 bw ( KiB/s): min=26368, max=41728, per=31.09%, avg=36761.60, stdev=5142.99, samples=10 00:37:05.264 iops : min= 206, max= 326, avg=287.20, stdev=40.18, samples=10 00:37:05.264 lat (msec) : 10=56.47%, 20=41.59%, 50=1.46%, 100=0.49% 00:37:05.264 cpu : usr=94.45%, sys=5.29%, ctx=6, majf=0, minf=101 00:37:05.264 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:05.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.265 issued rwts: total=1438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:05.265 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:05.265 filename0: (groupid=0, jobs=1): err= 0: pid=2271761: Wed Nov 20 17:19:57 2024 00:37:05.265 read: IOPS=331, BW=41.4MiB/s (43.4MB/s)(209MiB/5046msec) 00:37:05.265 slat (nsec): min=5535, max=32637, avg=8101.47, stdev=1492.13 00:37:05.265 clat (usec): min=4934, max=51316, avg=9022.65, stdev=2501.87 00:37:05.265 lat (usec): min=4942, max=51325, avg=9030.75, stdev=2501.82 00:37:05.265 clat percentiles (usec): 00:37:05.265 | 1.00th=[ 5997], 5.00th=[ 6980], 10.00th=[ 7373], 20.00th=[ 7898], 
00:37:05.265 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372], 00:37:05.265 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10552], 00:37:05.265 | 99.00th=[11338], 99.50th=[12780], 99.90th=[51119], 99.95th=[51119], 00:37:05.265 | 99.99th=[51119] 00:37:05.265 bw ( KiB/s): min=37632, max=46848, per=36.14%, avg=42726.40, stdev=2696.99, samples=10 00:37:05.265 iops : min= 294, max= 366, avg=333.80, stdev=21.07, samples=10 00:37:05.265 lat (msec) : 10=86.24%, 20=13.46%, 50=0.12%, 100=0.18% 00:37:05.265 cpu : usr=93.76%, sys=5.99%, ctx=10, majf=0, minf=71 00:37:05.265 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:05.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.265 issued rwts: total=1671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:05.265 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:05.265 00:37:05.265 Run status group 0 (all jobs): 00:37:05.265 READ: bw=115MiB/s (121MB/s), 35.6MiB/s-41.4MiB/s (37.4MB/s-43.4MB/s), io=583MiB (611MB), run=5045-5047msec 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.525 bdev_null0 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.525 [2024-11-20 17:19:57.577516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.525 bdev_null1 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:05.525 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.526 bdev_null2 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:05.526 17:19:57 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:05.526 { 00:37:05.526 "params": { 00:37:05.526 "name": "Nvme$subsystem", 00:37:05.526 "trtype": "$TEST_TRANSPORT", 00:37:05.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:05.526 "adrfam": "ipv4", 00:37:05.526 "trsvcid": "$NVMF_PORT", 00:37:05.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:05.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:05.526 "hdgst": ${hdgst:-false}, 00:37:05.526 "ddgst": ${ddgst:-false} 00:37:05.526 }, 00:37:05.526 "method": "bdev_nvme_attach_controller" 00:37:05.526 } 00:37:05.526 EOF 00:37:05.526 )") 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:05.526 { 00:37:05.526 "params": { 00:37:05.526 "name": "Nvme$subsystem", 00:37:05.526 "trtype": "$TEST_TRANSPORT", 00:37:05.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:05.526 "adrfam": "ipv4", 00:37:05.526 "trsvcid": "$NVMF_PORT", 00:37:05.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:05.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:05.526 "hdgst": ${hdgst:-false}, 00:37:05.526 "ddgst": ${ddgst:-false} 00:37:05.526 }, 00:37:05.526 "method": "bdev_nvme_attach_controller" 00:37:05.526 } 00:37:05.526 EOF 00:37:05.526 )") 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:05.526 17:19:57 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:05.526 17:19:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:05.787 17:19:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:05.787 17:19:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:05.787 { 00:37:05.787 "params": { 00:37:05.787 "name": "Nvme$subsystem", 00:37:05.787 "trtype": "$TEST_TRANSPORT", 00:37:05.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:05.787 "adrfam": "ipv4", 00:37:05.787 "trsvcid": "$NVMF_PORT", 00:37:05.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:05.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:05.787 "hdgst": ${hdgst:-false}, 00:37:05.787 "ddgst": ${ddgst:-false} 00:37:05.787 }, 00:37:05.787 "method": "bdev_nvme_attach_controller" 00:37:05.787 } 00:37:05.787 EOF 00:37:05.787 )") 00:37:05.787 17:19:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:05.787 17:19:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:05.787 17:19:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:05.787 17:19:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:05.787 "params": { 00:37:05.787 "name": "Nvme0", 00:37:05.787 "trtype": "tcp", 00:37:05.787 "traddr": "10.0.0.2", 00:37:05.787 "adrfam": "ipv4", 00:37:05.787 "trsvcid": "4420", 00:37:05.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:05.787 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:05.787 "hdgst": false, 00:37:05.787 "ddgst": false 00:37:05.787 }, 00:37:05.787 "method": "bdev_nvme_attach_controller" 00:37:05.787 },{ 00:37:05.787 "params": { 00:37:05.787 "name": "Nvme1", 00:37:05.787 "trtype": "tcp", 00:37:05.787 "traddr": "10.0.0.2", 00:37:05.787 "adrfam": "ipv4", 00:37:05.787 "trsvcid": "4420", 00:37:05.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:05.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:05.787 "hdgst": false, 00:37:05.787 "ddgst": false 00:37:05.787 }, 00:37:05.787 "method": "bdev_nvme_attach_controller" 00:37:05.787 },{ 00:37:05.787 "params": { 00:37:05.787 "name": "Nvme2", 00:37:05.787 "trtype": "tcp", 00:37:05.787 "traddr": "10.0.0.2", 00:37:05.787 "adrfam": "ipv4", 00:37:05.787 "trsvcid": "4420", 00:37:05.787 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:05.787 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:05.787 "hdgst": false, 00:37:05.787 "ddgst": false 00:37:05.787 }, 00:37:05.787 "method": "bdev_nvme_attach_controller" 00:37:05.787 }' 00:37:05.787 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:05.787 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:05.787 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:05.787 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.787 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:05.787 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:05.787 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:05.787 
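Each null bdev in this pass is 64 MB of 512-byte blocks carrying 16 bytes of metadata per block with DIF type 2 (the --dif-type 2 creates above). A hypothetical spot check of the resulting layout, again assuming the stock scripts/rpc.py client and the usual bdev_get_bdevs field names:

    # Illustrative: confirm block/metadata/DIF layout of one null bdev.
    scripts/rpc.py bdev_get_bdevs -b bdev_null0 \
        | jq '.[0] | {block_size, md_size, dif_type}'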
17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:05.787 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:05.787 17:19:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:06.047 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:06.047 ... 00:37:06.047 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:06.047 ... 00:37:06.047 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:06.047 ... 00:37:06.047 fio-3.35 00:37:06.047 Starting 24 threads 00:37:18.268 00:37:18.268 filename0: (groupid=0, jobs=1): err= 0: pid=2272985: Wed Nov 20 17:20:08 2024 00:37:18.268 read: IOPS=690, BW=2761KiB/s (2827kB/s)(27.0MiB/10008msec) 00:37:18.268 slat (nsec): min=5634, max=80933, avg=14951.61, stdev=12487.62 00:37:18.268 clat (usec): min=8742, max=42494, avg=23081.28, stdev=4648.88 00:37:18.268 lat (usec): min=8749, max=42518, avg=23096.23, stdev=4650.65 00:37:18.268 clat percentiles (usec): 00:37:18.268 | 1.00th=[12649], 5.00th=[14877], 10.00th=[16909], 20.00th=[19006], 00:37:18.268 | 30.00th=[21890], 40.00th=[23462], 50.00th=[23987], 60.00th=[24249], 00:37:18.268 | 70.00th=[24511], 80.00th=[24773], 90.00th=[27919], 95.00th=[31065], 00:37:18.268 | 99.00th=[38536], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:37:18.268 | 99.99th=[42730] 00:37:18.268 bw ( KiB/s): min= 2560, max= 3104, per=4.31%, avg=2759.58, stdev=139.22, samples=19 00:37:18.268 iops : min= 640, max= 776, avg=689.89, stdev=34.80, samples=19 00:37:18.268 lat (msec) : 10=0.23%, 20=23.47%, 50=76.30% 00:37:18.268 cpu : usr=98.55%, sys=1.01%, ctx=67, majf=0, minf=23 00:37:18.268 IO depths : 1=1.5%, 2=3.1%, 4=9.8%, 8=73.4%, 16=12.1%, 32=0.0%, >=64=0.0% 00:37:18.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.268 complete : 0=0.0%, 4=90.1%, 8=5.5%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.268 issued rwts: total=6908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.268 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.268 filename0: (groupid=0, jobs=1): err= 0: pid=2272986: Wed Nov 20 17:20:08 2024 00:37:18.268 read: IOPS=655, BW=2621KiB/s (2684kB/s)(25.6MiB/10010msec) 00:37:18.268 slat (nsec): min=5655, max=70770, avg=16535.62, stdev=10662.76 00:37:18.268 clat (usec): min=12784, max=35858, avg=24279.48, stdev=1206.17 00:37:18.268 lat (usec): min=12790, max=35881, avg=24296.02, stdev=1206.44 00:37:18.268 clat percentiles (usec): 00:37:18.268 | 1.00th=[20841], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:18.268 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511], 00:37:18.268 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25297], 00:37:18.268 | 99.00th=[27657], 99.50th=[28967], 99.90th=[35914], 99.95th=[35914], 00:37:18.268 | 99.99th=[35914] 00:37:18.268 bw ( KiB/s): min= 2432, max= 2688, per=4.08%, avg=2613.89, stdev=77.69, samples=19 00:37:18.268 iops : min= 608, max= 672, avg=653.47, stdev=19.42, samples=19 00:37:18.268 lat (msec) : 20=0.75%, 50=99.25% 00:37:18.268 cpu : usr=98.82%, sys=0.92%, ctx=14, majf=0, minf=17 00:37:18.268 IO depths : 1=5.8%, 2=11.9%, 4=24.4%, 8=51.2%, 
16=6.7%, 32=0.0%, >=64=0.0% 00:37:18.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.268 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.268 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.268 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.268 filename0: (groupid=0, jobs=1): err= 0: pid=2272987: Wed Nov 20 17:20:08 2024 00:37:18.268 read: IOPS=655, BW=2623KiB/s (2686kB/s)(25.6MiB/10002msec) 00:37:18.268 slat (nsec): min=5655, max=72641, avg=21385.58, stdev=13297.48 00:37:18.268 clat (usec): min=12413, max=32715, avg=24218.57, stdev=1001.96 00:37:18.268 lat (usec): min=12423, max=32731, avg=24239.96, stdev=1001.62 00:37:18.268 clat percentiles (usec): 00:37:18.268 | 1.00th=[22938], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987], 00:37:18.268 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:18.268 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297], 00:37:18.268 | 99.00th=[25822], 99.50th=[26084], 99.90th=[32637], 99.95th=[32637], 00:37:18.268 | 99.99th=[32637] 00:37:18.268 bw ( KiB/s): min= 2560, max= 2688, per=4.08%, avg=2613.89, stdev=64.93, samples=19 00:37:18.268 iops : min= 640, max= 672, avg=653.47, stdev=16.23, samples=19 00:37:18.268 lat (msec) : 20=0.49%, 50=99.51% 00:37:18.268 cpu : usr=98.90%, sys=0.84%, ctx=33, majf=0, minf=16 00:37:18.268 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:18.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.268 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.268 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.268 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.268 filename0: (groupid=0, jobs=1): err= 0: pid=2272988: Wed Nov 20 17:20:08 2024 00:37:18.269 read: IOPS=675, BW=2701KiB/s (2766kB/s)(26.4MiB/10017msec) 00:37:18.269 slat (nsec): min=5636, max=76294, avg=14330.18, stdev=12140.08 00:37:18.269 clat (usec): min=7788, max=41133, avg=23575.69, stdev=3004.51 00:37:18.269 lat (usec): min=7812, max=41143, avg=23590.02, stdev=3004.74 00:37:18.269 clat percentiles (usec): 00:37:18.269 | 1.00th=[ 9241], 5.00th=[17171], 10.00th=[21627], 20.00th=[23725], 00:37:18.269 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:18.269 | 70.00th=[24511], 80.00th=[24773], 90.00th=[24773], 95.00th=[25297], 00:37:18.269 | 99.00th=[28967], 99.50th=[31589], 99.90th=[39060], 99.95th=[40633], 00:37:18.269 | 99.99th=[41157] 00:37:18.269 bw ( KiB/s): min= 2560, max= 3296, per=4.22%, avg=2700.35, stdev=189.59, samples=20 00:37:18.269 iops : min= 640, max= 824, avg=675.05, stdev=47.42, samples=20 00:37:18.269 lat (msec) : 10=1.23%, 20=7.60%, 50=91.18% 00:37:18.269 cpu : usr=98.76%, sys=0.86%, ctx=126, majf=0, minf=15 00:37:18.269 IO depths : 1=5.0%, 2=10.6%, 4=22.9%, 8=54.0%, 16=7.6%, 32=0.0%, >=64=0.0% 00:37:18.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.269 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.269 issued rwts: total=6765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.269 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.269 filename0: (groupid=0, jobs=1): err= 0: pid=2272989: Wed Nov 20 17:20:08 2024 00:37:18.269 read: IOPS=655, BW=2623KiB/s (2686kB/s)(25.6MiB/10004msec) 00:37:18.269 slat (nsec): min=5364, max=86827, avg=19682.97, stdev=13219.93 
00:37:18.269 clat (usec): min=12828, max=31708, avg=24222.69, stdev=1010.49 00:37:18.269 lat (usec): min=12837, max=31723, avg=24242.37, stdev=1009.02 00:37:18.269 clat percentiles (usec): 00:37:18.269 | 1.00th=[23200], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:37:18.269 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:18.269 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25297], 00:37:18.269 | 99.00th=[25822], 99.50th=[28705], 99.90th=[31589], 99.95th=[31589], 00:37:18.269 | 99.99th=[31589] 00:37:18.269 bw ( KiB/s): min= 2560, max= 2688, per=4.08%, avg=2613.89, stdev=64.93, samples=19 00:37:18.269 iops : min= 640, max= 672, avg=653.47, stdev=16.23, samples=19 00:37:18.269 lat (msec) : 20=0.58%, 50=99.42% 00:37:18.269 cpu : usr=98.63%, sys=0.93%, ctx=66, majf=0, minf=13 00:37:18.269 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:18.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.269 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.269 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.269 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.269 filename0: (groupid=0, jobs=1): err= 0: pid=2272990: Wed Nov 20 17:20:08 2024 00:37:18.269 read: IOPS=661, BW=2646KiB/s (2709kB/s)(25.9MiB/10014msec) 00:37:18.269 slat (nsec): min=5679, max=82607, avg=18264.30, stdev=14146.53 00:37:18.269 clat (usec): min=5537, max=28706, avg=24038.12, stdev=1909.44 00:37:18.269 lat (usec): min=5561, max=28730, avg=24056.38, stdev=1908.44 00:37:18.269 clat percentiles (usec): 00:37:18.269 | 1.00th=[11600], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:18.269 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:18.269 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25035], 00:37:18.269 | 99.00th=[25822], 99.50th=[26084], 99.90th=[28443], 99.95th=[28705], 00:37:18.269 | 99.99th=[28705] 00:37:18.269 bw ( KiB/s): min= 2560, max= 3072, per=4.13%, avg=2643.95, stdev=119.21, samples=20 00:37:18.269 iops : min= 640, max= 768, avg=660.95, stdev=29.81, samples=20 00:37:18.269 lat (msec) : 10=0.65%, 20=1.31%, 50=98.04% 00:37:18.269 cpu : usr=98.61%, sys=0.90%, ctx=86, majf=0, minf=19 00:37:18.269 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:18.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.269 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.269 issued rwts: total=6624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.269 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.269 filename0: (groupid=0, jobs=1): err= 0: pid=2272991: Wed Nov 20 17:20:08 2024 00:37:18.269 read: IOPS=670, BW=2683KiB/s (2747kB/s)(26.2MiB/10016msec) 00:37:18.269 slat (nsec): min=5644, max=77492, avg=10943.86, stdev=7125.17 00:37:18.269 clat (usec): min=1170, max=38102, avg=23762.27, stdev=3264.05 00:37:18.269 lat (usec): min=1188, max=38109, avg=23773.22, stdev=3262.74 00:37:18.269 clat percentiles (usec): 00:37:18.269 | 1.00th=[ 3490], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:18.269 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:37:18.269 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25297], 00:37:18.269 | 99.00th=[26084], 99.50th=[26346], 99.90th=[26870], 99.95th=[37487], 00:37:18.269 | 99.99th=[38011] 00:37:18.269 bw ( KiB/s): min= 2560, max= 
3816, per=4.19%, avg=2680.40, stdev=274.83, samples=20 00:37:18.269 iops : min= 640, max= 954, avg=670.10, stdev=68.71, samples=20 00:37:18.269 lat (msec) : 2=0.09%, 4=0.92%, 10=1.68%, 20=0.42%, 50=96.89% 00:37:18.269 cpu : usr=98.57%, sys=0.97%, ctx=106, majf=0, minf=29 00:37:18.269 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:18.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.269 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.269 issued rwts: total=6717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.269 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.269 filename0: (groupid=0, jobs=1): err= 0: pid=2272992: Wed Nov 20 17:20:08 2024 00:37:18.269 read: IOPS=680, BW=2721KiB/s (2787kB/s)(26.6MiB/10020msec) 00:37:18.269 slat (nsec): min=5635, max=73845, avg=12016.57, stdev=8323.67 00:37:18.269 clat (usec): min=1544, max=41022, avg=23422.64, stdev=3699.14 00:37:18.269 lat (usec): min=1560, max=41029, avg=23434.65, stdev=3698.65 00:37:18.269 clat percentiles (usec): 00:37:18.269 | 1.00th=[ 4686], 5.00th=[14615], 10.00th=[23462], 20.00th=[23725], 00:37:18.269 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:18.269 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25297], 00:37:18.269 | 99.00th=[26608], 99.50th=[29492], 99.90th=[35914], 99.95th=[41157], 00:37:18.269 | 99.99th=[41157] 00:37:18.269 bw ( KiB/s): min= 2560, max= 4040, per=4.25%, avg=2720.40, stdev=320.09, samples=20 00:37:18.269 iops : min= 640, max= 1010, avg=680.10, stdev=80.02, samples=20 00:37:18.269 lat (msec) : 2=0.09%, 4=0.81%, 10=1.44%, 20=5.05%, 50=92.62% 00:37:18.269 cpu : usr=98.80%, sys=0.92%, ctx=16, majf=0, minf=26 00:37:18.269 IO depths : 1=5.5%, 2=11.2%, 4=23.1%, 8=53.1%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:18.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.269 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.269 issued rwts: total=6817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.269 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.269 filename1: (groupid=0, jobs=1): err= 0: pid=2272993: Wed Nov 20 17:20:08 2024 00:37:18.269 read: IOPS=659, BW=2637KiB/s (2701kB/s)(25.8MiB/10001msec) 00:37:18.269 slat (nsec): min=5634, max=76208, avg=22405.67, stdev=12671.03 00:37:18.269 clat (usec): min=8262, max=48379, avg=24060.96, stdev=1911.88 00:37:18.269 lat (usec): min=8268, max=48397, avg=24083.36, stdev=1912.77 00:37:18.269 clat percentiles (usec): 00:37:18.269 | 1.00th=[15664], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:37:18.269 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:18.269 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:18.269 | 99.00th=[28705], 99.50th=[30540], 99.90th=[43254], 99.95th=[43254], 00:37:18.269 | 99.99th=[48497] 00:37:18.270 bw ( KiB/s): min= 2432, max= 2912, per=4.11%, avg=2628.21, stdev=101.45, samples=19 00:37:18.270 iops : min= 608, max= 728, avg=657.05, stdev=25.36, samples=19 00:37:18.270 lat (msec) : 10=0.24%, 20=2.24%, 50=97.51% 00:37:18.270 cpu : usr=98.96%, sys=0.77%, ctx=14, majf=0, minf=25 00:37:18.270 IO depths : 1=5.7%, 2=11.5%, 4=23.7%, 8=52.1%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:18.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.270 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.270 issued rwts: 
total=6594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.270 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.270 filename1: (groupid=0, jobs=1): err= 0: pid=2272994: Wed Nov 20 17:20:08 2024 00:37:18.270 read: IOPS=661, BW=2647KiB/s (2711kB/s)(25.9MiB/10005msec) 00:37:18.270 slat (nsec): min=5407, max=74704, avg=19452.76, stdev=13022.39 00:37:18.270 clat (usec): min=7099, max=51274, avg=24013.01, stdev=3322.76 00:37:18.270 lat (usec): min=7104, max=51290, avg=24032.47, stdev=3323.94 00:37:18.270 clat percentiles (usec): 00:37:18.270 | 1.00th=[14222], 5.00th=[17957], 10.00th=[21365], 20.00th=[23725], 00:37:18.270 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:18.270 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25297], 95.00th=[28967], 00:37:18.270 | 99.00th=[36963], 99.50th=[39584], 99.90th=[43779], 99.95th=[44303], 00:37:18.270 | 99.99th=[51119] 00:37:18.270 bw ( KiB/s): min= 2560, max= 2800, per=4.13%, avg=2644.21, stdev=83.28, samples=19 00:37:18.270 iops : min= 640, max= 700, avg=661.05, stdev=20.82, samples=19 00:37:18.270 lat (msec) : 10=0.21%, 20=8.22%, 50=91.53%, 100=0.05% 00:37:18.270 cpu : usr=98.62%, sys=1.07%, ctx=71, majf=0, minf=20 00:37:18.270 IO depths : 1=3.9%, 2=7.7%, 4=17.4%, 8=61.7%, 16=9.4%, 32=0.0%, >=64=0.0% 00:37:18.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.270 complete : 0=0.0%, 4=92.2%, 8=2.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.270 issued rwts: total=6622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.270 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.270 filename1: (groupid=0, jobs=1): err= 0: pid=2272995: Wed Nov 20 17:20:08 2024 00:37:18.270 read: IOPS=655, BW=2624KiB/s (2687kB/s)(25.6MiB/10001msec) 00:37:18.270 slat (nsec): min=5753, max=73114, avg=24005.59, stdev=11975.30 00:37:18.270 clat (usec): min=8227, max=48503, avg=24174.19, stdev=2031.87 00:37:18.270 lat (usec): min=8240, max=48523, avg=24198.20, stdev=2031.83 00:37:18.270 clat percentiles (usec): 00:37:18.270 | 1.00th=[15664], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:18.270 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:18.270 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297], 00:37:18.270 | 99.00th=[29754], 99.50th=[32900], 99.90th=[48497], 99.95th=[48497], 00:37:18.270 | 99.99th=[48497] 00:37:18.270 bw ( KiB/s): min= 2432, max= 2688, per=4.08%, avg=2613.89, stdev=77.88, samples=19 00:37:18.270 iops : min= 608, max= 672, avg=653.47, stdev=19.47, samples=19 00:37:18.270 lat (msec) : 10=0.24%, 20=1.37%, 50=98.38% 00:37:18.270 cpu : usr=98.98%, sys=0.75%, ctx=14, majf=0, minf=18 00:37:18.270 IO depths : 1=5.5%, 2=11.6%, 4=24.5%, 8=51.4%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:18.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.270 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.270 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.270 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.270 filename1: (groupid=0, jobs=1): err= 0: pid=2272996: Wed Nov 20 17:20:08 2024 00:37:18.270 read: IOPS=661, BW=2647KiB/s (2710kB/s)(25.9MiB/10002msec) 00:37:18.270 slat (nsec): min=5633, max=70468, avg=21245.74, stdev=12021.38 00:37:18.270 clat (usec): min=8155, max=49109, avg=24004.95, stdev=2675.90 00:37:18.270 lat (usec): min=8169, max=49129, avg=24026.20, stdev=2676.90 00:37:18.270 clat percentiles (usec): 00:37:18.270 | 
1.00th=[14484], 5.00th=[19530], 10.00th=[23462], 20.00th=[23725], 00:37:18.270 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:18.270 | 70.00th=[24511], 80.00th=[24511], 90.00th=[25035], 95.00th=[25560], 00:37:18.270 | 99.00th=[32375], 99.50th=[38536], 99.90th=[49021], 99.95th=[49021], 00:37:18.270 | 99.99th=[49021] 00:37:18.270 bw ( KiB/s): min= 2436, max= 2880, per=4.12%, avg=2638.53, stdev=96.10, samples=19 00:37:18.270 iops : min= 609, max= 720, avg=659.63, stdev=24.03, samples=19 00:37:18.270 lat (msec) : 10=0.24%, 20=5.08%, 50=94.68% 00:37:18.270 cpu : usr=98.70%, sys=0.99%, ctx=91, majf=0, minf=25 00:37:18.270 IO depths : 1=4.1%, 2=9.2%, 4=21.2%, 8=56.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:37:18.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.270 complete : 0=0.0%, 4=93.2%, 8=1.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.270 issued rwts: total=6618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.270 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.270 filename1: (groupid=0, jobs=1): err= 0: pid=2272997: Wed Nov 20 17:20:08 2024 00:37:18.270 read: IOPS=657, BW=2631KiB/s (2694kB/s)(25.7MiB/10008msec) 00:37:18.270 slat (nsec): min=5656, max=87117, avg=14686.65, stdev=12388.54 00:37:18.270 clat (usec): min=10550, max=38931, avg=24211.05, stdev=1433.27 00:37:18.270 lat (usec): min=10561, max=38937, avg=24225.74, stdev=1432.52 00:37:18.270 clat percentiles (usec): 00:37:18.270 | 1.00th=[17171], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:18.270 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511], 00:37:18.270 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25297], 00:37:18.270 | 99.00th=[27657], 99.50th=[28967], 99.90th=[38536], 99.95th=[39060], 00:37:18.270 | 99.99th=[39060] 00:37:18.270 bw ( KiB/s): min= 2560, max= 2736, per=4.11%, avg=2629.89, stdev=68.97, samples=19 00:37:18.270 iops : min= 640, max= 684, avg=657.47, stdev=17.24, samples=19 00:37:18.270 lat (msec) : 20=1.61%, 50=98.39% 00:37:18.270 cpu : usr=98.47%, sys=1.07%, ctx=124, majf=0, minf=17 00:37:18.270 IO depths : 1=5.9%, 2=11.9%, 4=24.5%, 8=51.0%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:18.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.270 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.270 issued rwts: total=6582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.270 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.270 filename1: (groupid=0, jobs=1): err= 0: pid=2272998: Wed Nov 20 17:20:08 2024 00:37:18.270 read: IOPS=655, BW=2624KiB/s (2687kB/s)(25.6MiB/10001msec) 00:37:18.270 slat (nsec): min=5686, max=85344, avg=27773.40, stdev=15679.46 00:37:18.270 clat (usec): min=8201, max=48501, avg=24117.94, stdev=1702.63 00:37:18.270 lat (usec): min=8208, max=48520, avg=24145.71, stdev=1702.94 00:37:18.270 clat percentiles (usec): 00:37:18.270 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:37:18.270 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:18.270 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:18.270 | 99.00th=[25822], 99.50th=[26084], 99.90th=[48497], 99.95th=[48497], 00:37:18.270 | 99.99th=[48497] 00:37:18.270 bw ( KiB/s): min= 2432, max= 2688, per=4.08%, avg=2613.89, stdev=77.69, samples=19 00:37:18.270 iops : min= 608, max= 672, avg=653.47, stdev=19.42, samples=19 00:37:18.270 lat (msec) : 10=0.24%, 20=0.49%, 50=99.27% 00:37:18.270 cpu : 
usr=98.93%, sys=0.69%, ctx=95, majf=0, minf=27 00:37:18.270 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:18.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.270 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.271 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.271 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.271 filename1: (groupid=0, jobs=1): err= 0: pid=2272999: Wed Nov 20 17:20:08 2024 00:37:18.271 read: IOPS=656, BW=2627KiB/s (2690kB/s)(25.7MiB/10009msec) 00:37:18.271 slat (nsec): min=5638, max=64531, avg=17523.71, stdev=10487.28 00:37:18.271 clat (usec): min=8633, max=41041, avg=24199.92, stdev=1663.34 00:37:18.271 lat (usec): min=8643, max=41052, avg=24217.44, stdev=1663.52 00:37:18.271 clat percentiles (usec): 00:37:18.271 | 1.00th=[16712], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:18.271 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:18.271 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25297], 00:37:18.271 | 99.00th=[30802], 99.50th=[32113], 99.90th=[37487], 99.95th=[41157], 00:37:18.271 | 99.99th=[41157] 00:37:18.271 bw ( KiB/s): min= 2560, max= 2720, per=4.10%, avg=2622.32, stdev=67.86, samples=19 00:37:18.271 iops : min= 640, max= 680, avg=655.58, stdev=16.97, samples=19 00:37:18.271 lat (msec) : 10=0.14%, 20=1.63%, 50=98.24% 00:37:18.271 cpu : usr=98.92%, sys=0.74%, ctx=56, majf=0, minf=20 00:37:18.271 IO depths : 1=5.7%, 2=11.5%, 4=23.7%, 8=52.1%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:18.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.271 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.271 issued rwts: total=6573,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.271 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.271 filename1: (groupid=0, jobs=1): err= 0: pid=2273000: Wed Nov 20 17:20:08 2024 00:37:18.271 read: IOPS=654, BW=2616KiB/s (2679kB/s)(25.7MiB/10050msec) 00:37:18.271 slat (nsec): min=5643, max=76289, avg=12243.81, stdev=9148.15 00:37:18.271 clat (usec): min=8758, max=56120, avg=24341.26, stdev=1843.95 00:37:18.271 lat (usec): min=8768, max=56153, avg=24353.50, stdev=1844.68 00:37:18.271 clat percentiles (usec): 00:37:18.271 | 1.00th=[20055], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:37:18.271 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511], 00:37:18.271 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25560], 00:37:18.271 | 99.00th=[28443], 99.50th=[28967], 99.90th=[55837], 99.95th=[55837], 00:37:18.271 | 99.99th=[56361] 00:37:18.271 bw ( KiB/s): min= 2560, max= 2792, per=4.10%, avg=2626.11, stdev=75.23, samples=19 00:37:18.271 iops : min= 640, max= 698, avg=656.53, stdev=18.81, samples=19 00:37:18.271 lat (msec) : 10=0.11%, 20=0.84%, 50=98.87%, 100=0.18% 00:37:18.271 cpu : usr=98.59%, sys=0.99%, ctx=125, majf=0, minf=22 00:37:18.271 IO depths : 1=4.2%, 2=10.1%, 4=23.5%, 8=53.9%, 16=8.3%, 32=0.0%, >=64=0.0% 00:37:18.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.271 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.271 issued rwts: total=6573,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.271 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.271 filename2: (groupid=0, jobs=1): err= 0: pid=2273001: Wed Nov 20 17:20:08 2024 00:37:18.271 read: IOPS=658, 
BW=2635KiB/s (2698kB/s)(25.7MiB/10002msec) 00:37:18.271 slat (nsec): min=5631, max=73536, avg=19424.78, stdev=11939.67 00:37:18.271 clat (usec): min=8268, max=44012, avg=24121.67, stdev=2378.12 00:37:18.271 lat (usec): min=8294, max=44032, avg=24141.09, stdev=2378.44 00:37:18.271 clat percentiles (usec): 00:37:18.271 | 1.00th=[14615], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:18.271 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:18.271 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25297], 00:37:18.271 | 99.00th=[32900], 99.50th=[36963], 99.90th=[43779], 99.95th=[43779], 00:37:18.271 | 99.99th=[43779] 00:37:18.271 bw ( KiB/s): min= 2436, max= 2864, per=4.10%, avg=2625.89, stdev=95.98, samples=19 00:37:18.271 iops : min= 609, max= 716, avg=656.47, stdev=24.00, samples=19 00:37:18.271 lat (msec) : 10=0.30%, 20=2.88%, 50=96.81% 00:37:18.271 cpu : usr=98.19%, sys=1.20%, ctx=155, majf=0, minf=36 00:37:18.271 IO depths : 1=5.6%, 2=11.6%, 4=24.0%, 8=51.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:37:18.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.271 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.271 issued rwts: total=6588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.271 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.271 filename2: (groupid=0, jobs=1): err= 0: pid=2273002: Wed Nov 20 17:20:08 2024 00:37:18.271 read: IOPS=662, BW=2651KiB/s (2715kB/s)(25.9MiB/10003msec) 00:37:18.271 slat (nsec): min=5517, max=77362, avg=19129.82, stdev=13463.14 00:37:18.271 clat (usec): min=7940, max=45398, avg=23962.42, stdev=2422.40 00:37:18.271 lat (usec): min=7946, max=45416, avg=23981.55, stdev=2423.25 00:37:18.271 clat percentiles (usec): 00:37:18.271 | 1.00th=[13435], 5.00th=[20579], 10.00th=[23462], 20.00th=[23725], 00:37:18.271 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:18.271 | 70.00th=[24511], 80.00th=[24511], 90.00th=[25035], 95.00th=[25297], 00:37:18.271 | 99.00th=[30802], 99.50th=[33817], 99.90th=[45351], 99.95th=[45351], 00:37:18.271 | 99.99th=[45351] 00:37:18.271 bw ( KiB/s): min= 2432, max= 2880, per=4.13%, avg=2643.37, stdev=111.18, samples=19 00:37:18.271 iops : min= 608, max= 720, avg=660.84, stdev=27.80, samples=19 00:37:18.271 lat (msec) : 10=0.24%, 20=4.21%, 50=95.55% 00:37:18.271 cpu : usr=98.64%, sys=0.93%, ctx=63, majf=0, minf=29 00:37:18.271 IO depths : 1=5.4%, 2=11.0%, 4=22.9%, 8=53.5%, 16=7.2%, 32=0.0%, >=64=0.0% 00:37:18.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.271 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.271 issued rwts: total=6630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.271 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.271 filename2: (groupid=0, jobs=1): err= 0: pid=2273003: Wed Nov 20 17:20:08 2024 00:37:18.271 read: IOPS=655, BW=2623KiB/s (2686kB/s)(25.6MiB/10002msec) 00:37:18.271 slat (nsec): min=5675, max=81380, avg=23865.62, stdev=14951.91 00:37:18.271 clat (usec): min=12629, max=32755, avg=24195.71, stdev=1044.38 00:37:18.271 lat (usec): min=12643, max=32771, avg=24219.58, stdev=1042.80 00:37:18.271 clat percentiles (usec): 00:37:18.271 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:37:18.271 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:18.271 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297], 00:37:18.271 | 
99.00th=[25822], 99.50th=[28705], 99.90th=[32637], 99.95th=[32637], 00:37:18.271 | 99.99th=[32637] 00:37:18.271 bw ( KiB/s): min= 2560, max= 2688, per=4.08%, avg=2613.89, stdev=64.93, samples=19 00:37:18.271 iops : min= 640, max= 672, avg=653.47, stdev=16.23, samples=19 00:37:18.271 lat (msec) : 20=0.55%, 50=99.45% 00:37:18.271 cpu : usr=99.05%, sys=0.69%, ctx=16, majf=0, minf=18 00:37:18.271 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:18.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.271 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.271 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.271 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.271 filename2: (groupid=0, jobs=1): err= 0: pid=2273004: Wed Nov 20 17:20:08 2024 00:37:18.271 read: IOPS=661, BW=2645KiB/s (2708kB/s)(25.8MiB/10003msec) 00:37:18.271 slat (nsec): min=5630, max=84724, avg=24190.10, stdev=14365.99 00:37:18.271 clat (usec): min=8345, max=50162, avg=23969.35, stdev=2119.44 00:37:18.271 lat (usec): min=8351, max=50184, avg=23993.54, stdev=2120.55 00:37:18.271 clat percentiles (usec): 00:37:18.271 | 1.00th=[15139], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:37:18.271 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:18.271 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:18.271 | 99.00th=[27919], 99.50th=[30540], 99.90th=[45351], 99.95th=[45351], 00:37:18.271 | 99.99th=[50070] 00:37:18.271 bw ( KiB/s): min= 2432, max= 2864, per=4.12%, avg=2636.63, stdev=103.77, samples=19 00:37:18.271 iops : min= 608, max= 716, avg=659.16, stdev=25.94, samples=19 00:37:18.271 lat (msec) : 10=0.36%, 20=2.89%, 50=96.72%, 100=0.03% 00:37:18.271 cpu : usr=98.48%, sys=1.03%, ctx=130, majf=0, minf=19 00:37:18.271 IO depths : 1=5.7%, 2=11.6%, 4=24.0%, 8=51.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:37:18.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.271 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.271 issued rwts: total=6614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.271 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.271 filename2: (groupid=0, jobs=1): err= 0: pid=2273005: Wed Nov 20 17:20:08 2024 00:37:18.271 read: IOPS=830, BW=3320KiB/s (3400kB/s)(32.5MiB/10014msec) 00:37:18.271 slat (nsec): min=5636, max=83636, avg=8879.83, stdev=6721.00 00:37:18.271 clat (usec): min=2971, max=40349, avg=19212.66, stdev=4684.47 00:37:18.271 lat (usec): min=2990, max=40355, avg=19221.54, stdev=4686.40 00:37:18.271 clat percentiles (usec): 00:37:18.271 | 1.00th=[ 7308], 5.00th=[13829], 10.00th=[14353], 20.00th=[14877], 00:37:18.272 | 30.00th=[15533], 40.00th=[17433], 50.00th=[19006], 60.00th=[20055], 00:37:18.272 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:18.272 | 99.00th=[28705], 99.50th=[35914], 99.90th=[39584], 99.95th=[40109], 00:37:18.272 | 99.99th=[40109] 00:37:18.272 bw ( KiB/s): min= 2560, max= 3920, per=5.19%, avg=3320.80, stdev=554.58, samples=20 00:37:18.272 iops : min= 640, max= 980, avg=830.20, stdev=138.65, samples=20 00:37:18.272 lat (msec) : 4=0.31%, 10=2.35%, 20=57.59%, 50=39.75% 00:37:18.272 cpu : usr=98.63%, sys=1.03%, ctx=143, majf=0, minf=26 00:37:18.272 IO depths : 1=1.8%, 2=3.9%, 4=12.6%, 8=70.8%, 16=10.9%, 32=0.0%, >=64=0.0% 00:37:18.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:37:18.272 complete : 0=0.0%, 4=90.7%, 8=3.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.272 issued rwts: total=8312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.272 filename2: (groupid=0, jobs=1): err= 0: pid=2273006: Wed Nov 20 17:20:08 2024 00:37:18.272 read: IOPS=657, BW=2629KiB/s (2692kB/s)(25.7MiB/10014msec) 00:37:18.272 slat (nsec): min=5635, max=79517, avg=14146.65, stdev=11662.95 00:37:18.272 clat (usec): min=11599, max=48661, avg=24269.75, stdev=3869.33 00:37:18.272 lat (usec): min=11609, max=48669, avg=24283.90, stdev=3870.62 00:37:18.272 clat percentiles (usec): 00:37:18.272 | 1.00th=[14353], 5.00th=[17171], 10.00th=[19792], 20.00th=[23725], 00:37:18.272 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:37:18.272 | 70.00th=[24511], 80.00th=[25035], 90.00th=[27919], 95.00th=[31327], 00:37:18.272 | 99.00th=[38536], 99.50th=[40109], 99.90th=[48497], 99.95th=[48497], 00:37:18.272 | 99.99th=[48497] 00:37:18.272 bw ( KiB/s): min= 2520, max= 2896, per=4.11%, avg=2628.40, stdev=97.38, samples=20 00:37:18.272 iops : min= 630, max= 724, avg=657.10, stdev=24.34, samples=20 00:37:18.272 lat (msec) : 20=10.36%, 50=89.64% 00:37:18.272 cpu : usr=98.17%, sys=1.16%, ctx=189, majf=0, minf=21 00:37:18.272 IO depths : 1=0.3%, 2=0.7%, 4=4.3%, 8=78.7%, 16=15.9%, 32=0.0%, >=64=0.0% 00:37:18.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.272 complete : 0=0.0%, 4=87.7%, 8=10.2%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.272 issued rwts: total=6581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.272 filename2: (groupid=0, jobs=1): err= 0: pid=2273007: Wed Nov 20 17:20:08 2024 00:37:18.272 read: IOPS=668, BW=2675KiB/s (2739kB/s)(26.1MiB/10010msec) 00:37:18.272 slat (nsec): min=5635, max=80508, avg=13624.00, stdev=11466.25 00:37:18.272 clat (usec): min=9606, max=35395, avg=23816.07, stdev=2245.84 00:37:18.272 lat (usec): min=9630, max=35406, avg=23829.70, stdev=2245.98 00:37:18.272 clat percentiles (usec): 00:37:18.272 | 1.00th=[12518], 5.00th=[19268], 10.00th=[23462], 20.00th=[23725], 00:37:18.272 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:37:18.272 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25297], 00:37:18.272 | 99.00th=[25822], 99.50th=[28967], 99.90th=[35390], 99.95th=[35390], 00:37:18.272 | 99.99th=[35390] 00:37:18.272 bw ( KiB/s): min= 2560, max= 3200, per=4.17%, avg=2671.20, stdev=156.55, samples=20 00:37:18.272 iops : min= 640, max= 800, avg=667.80, stdev=39.14, samples=20 00:37:18.272 lat (msec) : 10=0.36%, 20=5.11%, 50=94.53% 00:37:18.272 cpu : usr=98.83%, sys=0.81%, ctx=65, majf=0, minf=21 00:37:18.272 IO depths : 1=5.3%, 2=11.1%, 4=23.8%, 8=52.6%, 16=7.2%, 32=0.0%, >=64=0.0% 00:37:18.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.272 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.272 issued rwts: total=6694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.272 filename2: (groupid=0, jobs=1): err= 0: pid=2273008: Wed Nov 20 17:20:08 2024 00:37:18.272 read: IOPS=658, BW=2635KiB/s (2698kB/s)(25.8MiB/10008msec) 00:37:18.272 slat (nsec): min=5794, max=86071, avg=20347.86, stdev=13319.06 00:37:18.272 clat (usec): min=9183, max=29040, avg=24112.11, stdev=1412.87 00:37:18.272 lat (usec): min=9191, 
max=29070, avg=24132.46, stdev=1412.06 00:37:18.272 clat percentiles (usec): 00:37:18.272 | 1.00th=[16057], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:37:18.272 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:18.272 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25035], 00:37:18.272 | 99.00th=[25822], 99.50th=[25822], 99.90th=[28705], 99.95th=[28967], 00:37:18.272 | 99.99th=[28967] 00:37:18.272 bw ( KiB/s): min= 2560, max= 2816, per=4.11%, avg=2630.40, stdev=77.42, samples=20 00:37:18.272 iops : min= 640, max= 704, avg=657.60, stdev=19.35, samples=20 00:37:18.272 lat (msec) : 10=0.06%, 20=1.12%, 50=98.82% 00:37:18.272 cpu : usr=98.92%, sys=0.80%, ctx=11, majf=0, minf=19 00:37:18.272 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:18.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.272 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.272 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:18.272 00:37:18.272 Run status group 0 (all jobs): 00:37:18.272 READ: bw=62.5MiB/s (65.5MB/s), 2616KiB/s-3320KiB/s (2679kB/s-3400kB/s), io=628MiB (658MB), run=10001-10050msec 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:18.272 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:18.273 bdev_null0 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.273 17:20:09 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:18.273 [2024-11-20 17:20:09.252860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:18.273 bdev_null1 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 
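The rpc_cmd calls traced above rebuild the two test subsystems for the 8k,16k,128k pass: a DIF-type-1 null bdev, an NVMe-oF subsystem, a namespace mapping, and a TCP listener, once for bdev_null0/cnode0 and once for bdev_null1/cnode1. A minimal by-hand sketch of the same setup using SPDK's scripts/rpc.py, assuming a running nvmf_tgt whose TCP transport already exists; every argument below is copied from the trace:

  # Sketch only: replay the traced setup against a live target.
  # Assumes nvmf_tgt is running and `rpc.py nvmf_create_transport -t tcp`
  # was issued earlier in the harness.
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420
  # Subsystem 1 repeats the same four calls with bdev_null1 and cnode1.

The trace continues below with gen_nvmf_target_json expanding the per-subsystem attach parameters into the JSON config handed to fio.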
00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:18.273 { 00:37:18.273 "params": { 00:37:18.273 "name": "Nvme$subsystem", 00:37:18.273 "trtype": "$TEST_TRANSPORT", 00:37:18.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:18.273 "adrfam": "ipv4", 00:37:18.273 "trsvcid": "$NVMF_PORT", 00:37:18.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:18.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:18.273 "hdgst": ${hdgst:-false}, 00:37:18.273 "ddgst": ${ddgst:-false} 00:37:18.273 }, 00:37:18.273 "method": "bdev_nvme_attach_controller" 00:37:18.273 } 00:37:18.273 EOF 00:37:18.273 )") 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:18.273 { 00:37:18.273 "params": { 00:37:18.273 "name": "Nvme$subsystem", 00:37:18.273 "trtype": "$TEST_TRANSPORT", 00:37:18.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:18.273 "adrfam": "ipv4", 00:37:18.273 "trsvcid": "$NVMF_PORT", 00:37:18.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:18.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:18.273 "hdgst": ${hdgst:-false}, 00:37:18.273 "ddgst": ${ddgst:-false} 00:37:18.273 }, 00:37:18.273 "method": "bdev_nvme_attach_controller" 
00:37:18.273 } 00:37:18.273 EOF 00:37:18.273 )") 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:18.273 17:20:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:18.273 "params": { 00:37:18.273 "name": "Nvme0", 00:37:18.273 "trtype": "tcp", 00:37:18.273 "traddr": "10.0.0.2", 00:37:18.274 "adrfam": "ipv4", 00:37:18.274 "trsvcid": "4420", 00:37:18.274 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:18.274 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:18.274 "hdgst": false, 00:37:18.274 "ddgst": false 00:37:18.274 }, 00:37:18.274 "method": "bdev_nvme_attach_controller" 00:37:18.274 },{ 00:37:18.274 "params": { 00:37:18.274 "name": "Nvme1", 00:37:18.274 "trtype": "tcp", 00:37:18.274 "traddr": "10.0.0.2", 00:37:18.274 "adrfam": "ipv4", 00:37:18.274 "trsvcid": "4420", 00:37:18.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:18.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:18.274 "hdgst": false, 00:37:18.274 "ddgst": false 00:37:18.274 }, 00:37:18.274 "method": "bdev_nvme_attach_controller" 00:37:18.274 }' 00:37:18.274 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:18.274 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:18.274 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:18.274 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:18.274 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:18.274 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:18.274 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:18.274 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:18.274 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:18.274 17:20:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:18.274 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:18.274 ... 00:37:18.274 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:18.274 ... 
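At this point gen_nvmf_target_json has emitted the resolved bdev_nvme configuration, which fio_bdev hands to fio as --spdk_json_conf /dev/fd/62 while the generated job file arrives on /dev/fd/61. A rough standalone equivalent with ordinary files: the two params blocks are copied from the config printed above, while the outer "subsystems"/"bdev"/"config" wrapper is the usual SPDK JSON-config layout and is assumed here, as are the file paths.

  Contents of an assumed /tmp/bdev.json:

  { "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false } },
    { "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "hdgst": false, "ddgst": false } } ] } ] }

  Invocation, with the plugin path shortened:

  LD_PRELOAD=spdk/build/fio/spdk_bdev /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/job.fio

The job file on /dev/fd/61 is what produces the filename0/filename1 stanzas just above: randread over bs=8k,16k,128k with iodepth=8, numjobs=2 and runtime=5 per the dif.sh settings traced earlier, which with two files accounts for the four threads started below.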
00:37:18.274 fio-3.35 00:37:18.274 Starting 4 threads 00:37:23.555 00:37:23.555 filename0: (groupid=0, jobs=1): err= 0: pid=2275495: Wed Nov 20 17:20:15 2024 00:37:23.555 read: IOPS=2976, BW=23.3MiB/s (24.4MB/s)(116MiB/5003msec) 00:37:23.555 slat (nsec): min=5481, max=59314, avg=6097.01, stdev=1940.88 00:37:23.555 clat (usec): min=970, max=4585, avg=2672.08, stdev=327.14 00:37:23.555 lat (usec): min=987, max=4591, avg=2678.17, stdev=327.02 00:37:23.555 clat percentiles (usec): 00:37:23.555 | 1.00th=[ 1860], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2442], 00:37:23.555 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2737], 00:37:23.555 | 70.00th=[ 2769], 80.00th=[ 2868], 90.00th=[ 2966], 95.00th=[ 3097], 00:37:23.555 | 99.00th=[ 3818], 99.50th=[ 4015], 99.90th=[ 4293], 99.95th=[ 4424], 00:37:23.555 | 99.99th=[ 4555] 00:37:23.555 bw ( KiB/s): min=23504, max=24032, per=25.49%, avg=23841.78, stdev=180.21, samples=9 00:37:23.555 iops : min= 2938, max= 3004, avg=2980.22, stdev=22.53, samples=9 00:37:23.555 lat (usec) : 1000=0.01% 00:37:23.555 lat (msec) : 2=2.01%, 4=97.43%, 10=0.54% 00:37:23.555 cpu : usr=96.64%, sys=3.10%, ctx=6, majf=0, minf=24 00:37:23.555 IO depths : 1=0.1%, 2=0.3%, 4=68.6%, 8=31.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:23.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.555 complete : 0=0.0%, 4=95.2%, 8=4.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.555 issued rwts: total=14892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:23.555 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:23.555 filename0: (groupid=0, jobs=1): err= 0: pid=2275496: Wed Nov 20 17:20:15 2024 00:37:23.555 read: IOPS=3030, BW=23.7MiB/s (24.8MB/s)(118MiB/5001msec) 00:37:23.555 slat (nsec): min=5497, max=59432, avg=6094.41, stdev=1629.36 00:37:23.555 clat (usec): min=1209, max=4729, avg=2624.05, stdev=418.79 00:37:23.555 lat (usec): min=1215, max=4740, avg=2630.14, stdev=418.82 00:37:23.555 clat percentiles (usec): 00:37:23.555 | 1.00th=[ 1811], 5.00th=[ 2024], 10.00th=[ 2147], 20.00th=[ 2278], 00:37:23.555 | 30.00th=[ 2409], 40.00th=[ 2507], 50.00th=[ 2638], 60.00th=[ 2704], 00:37:23.555 | 70.00th=[ 2704], 80.00th=[ 2769], 90.00th=[ 3228], 95.00th=[ 3490], 00:37:23.555 | 99.00th=[ 3916], 99.50th=[ 4080], 99.90th=[ 4293], 99.95th=[ 4490], 00:37:23.555 | 99.99th=[ 4752] 00:37:23.555 bw ( KiB/s): min=23632, max=24592, per=25.90%, avg=24225.78, stdev=329.60, samples=9 00:37:23.555 iops : min= 2954, max= 3074, avg=3028.22, stdev=41.20, samples=9 00:37:23.555 lat (msec) : 2=3.53%, 4=95.61%, 10=0.86% 00:37:23.555 cpu : usr=97.32%, sys=2.44%, ctx=11, majf=0, minf=80 00:37:23.555 IO depths : 1=0.1%, 2=0.5%, 4=70.0%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:23.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.555 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.555 issued rwts: total=15154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:23.556 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:23.556 filename1: (groupid=0, jobs=1): err= 0: pid=2275497: Wed Nov 20 17:20:15 2024 00:37:23.556 read: IOPS=2852, BW=22.3MiB/s (23.4MB/s)(111MiB/5001msec) 00:37:23.556 slat (nsec): min=5485, max=32650, avg=5950.30, stdev=1502.07 00:37:23.556 clat (usec): min=878, max=6088, avg=2787.87, stdev=324.69 00:37:23.556 lat (usec): min=884, max=6116, avg=2793.82, stdev=324.77 00:37:23.556 clat percentiles (usec): 00:37:23.556 | 1.00th=[ 2245], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2573], 
00:37:23.556 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:37:23.556 | 70.00th=[ 2835], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3326], 00:37:23.556 | 99.00th=[ 4146], 99.50th=[ 4359], 99.90th=[ 4948], 99.95th=[ 5997], 00:37:23.556 | 99.99th=[ 6063] 00:37:23.556 bw ( KiB/s): min=22448, max=23184, per=24.38%, avg=22803.56, stdev=252.57, samples=9 00:37:23.556 iops : min= 2806, max= 2898, avg=2850.44, stdev=31.57, samples=9 00:37:23.556 lat (usec) : 1000=0.01% 00:37:23.556 lat (msec) : 2=0.22%, 4=98.29%, 10=1.47% 00:37:23.556 cpu : usr=96.72%, sys=3.04%, ctx=6, majf=0, minf=36 00:37:23.556 IO depths : 1=0.1%, 2=0.3%, 4=73.4%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:23.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.556 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.556 issued rwts: total=14265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:23.556 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:23.556 filename1: (groupid=0, jobs=1): err= 0: pid=2275498: Wed Nov 20 17:20:15 2024 00:37:23.556 read: IOPS=2836, BW=22.2MiB/s (23.2MB/s)(111MiB/5002msec) 00:37:23.556 slat (nsec): min=5482, max=61113, avg=6016.67, stdev=1721.16 00:37:23.556 clat (usec): min=1438, max=6203, avg=2803.51, stdev=419.36 00:37:23.556 lat (usec): min=1444, max=6209, avg=2809.52, stdev=419.32 00:37:23.556 clat percentiles (usec): 00:37:23.556 | 1.00th=[ 2073], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2540], 00:37:23.556 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:37:23.556 | 70.00th=[ 2835], 80.00th=[ 2933], 90.00th=[ 3261], 95.00th=[ 3884], 00:37:23.556 | 99.00th=[ 4293], 99.50th=[ 4359], 99.90th=[ 4555], 99.95th=[ 4752], 00:37:23.556 | 99.99th=[ 6194] 00:37:23.556 bw ( KiB/s): min=22576, max=22896, per=24.27%, avg=22704.00, stdev=111.14, samples=9 00:37:23.556 iops : min= 2822, max= 2862, avg=2838.00, stdev=13.89, samples=9 00:37:23.556 lat (msec) : 2=0.45%, 4=95.86%, 10=3.69% 00:37:23.556 cpu : usr=96.56%, sys=2.96%, ctx=186, majf=0, minf=36 00:37:23.556 IO depths : 1=0.1%, 2=0.3%, 4=72.9%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:23.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.556 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.556 issued rwts: total=14189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:23.556 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:23.556 00:37:23.556 Run status group 0 (all jobs): 00:37:23.556 READ: bw=91.4MiB/s (95.8MB/s), 22.2MiB/s-23.7MiB/s (23.2MB/s-24.8MB/s), io=457MiB (479MB), run=5001-5003msec 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.556 00:37:23.556 real 0m24.301s 00:37:23.556 user 5m15.847s 00:37:23.556 sys 0m4.737s 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:23.556 17:20:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:23.556 ************************************ 00:37:23.556 END TEST fio_dif_rand_params 00:37:23.556 ************************************ 00:37:23.817 17:20:15 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:23.817 17:20:15 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:23.817 17:20:15 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:23.817 17:20:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:23.817 ************************************ 00:37:23.817 START TEST fio_dif_digest 00:37:23.817 ************************************ 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:23.817 
17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:23.817 bdev_null0 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:23.817 [2024-11-20 17:20:15.821037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:23.817 { 00:37:23.817 "params": { 00:37:23.817 "name": "Nvme$subsystem", 00:37:23.817 "trtype": "$TEST_TRANSPORT", 00:37:23.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:23.817 "adrfam": "ipv4", 00:37:23.817 "trsvcid": "$NVMF_PORT", 00:37:23.817 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:37:23.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:23.817 "hdgst": ${hdgst:-false}, 00:37:23.817 "ddgst": ${ddgst:-false} 00:37:23.817 }, 00:37:23.817 "method": "bdev_nvme_attach_controller" 00:37:23.817 } 00:37:23.817 EOF 00:37:23.817 )") 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:23.817 "params": { 00:37:23.817 "name": "Nvme0", 00:37:23.817 "trtype": "tcp", 00:37:23.817 "traddr": "10.0.0.2", 00:37:23.817 "adrfam": "ipv4", 00:37:23.817 "trsvcid": "4420", 00:37:23.817 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:23.817 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:23.817 "hdgst": true, 00:37:23.817 "ddgst": true 00:37:23.817 }, 00:37:23.817 "method": "bdev_nvme_attach_controller" 00:37:23.817 }' 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:23.817 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:23.818 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:23.818 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:23.818 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:23.818 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:23.818 17:20:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:24.077 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:24.077 ... 
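For reference, the two file descriptors above carry a generated SPDK JSON config (fd 62) and a fio job file (fd 61). A minimal standalone sketch of this digest run, assuming the attached controller surfaces as bdev Nvme0n1 and reconstructing the job options from the fio banner printed above:

    # bdev.json -- condensed form of what gen_nvmf_target_json emits
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": true, "ddgst": true }
        }]
      }]
    }

    # run fio through the SPDK bdev ioengine, preloading the plugin as traced above
    LD_PRELOAD=spdk/build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
        --name=filename0 --filename=Nvme0n1 \
        --rw=randread --bs=128k --iodepth=3 --numjobs=3

With hdgst/ddgst set to true, the initiator computes and verifies the NVMe/TCP header and data digests (CRC32C) on every PDU, which is what this test exercises against the DIF-protected null bdev created above.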
00:37:24.077 fio-3.35 00:37:24.077 Starting 3 threads 00:37:36.419 00:37:36.419 filename0: (groupid=0, jobs=1): err= 0: pid=2276706: Wed Nov 20 17:20:26 2024 00:37:36.419 read: IOPS=361, BW=45.2MiB/s (47.3MB/s)(452MiB/10005msec) 00:37:36.419 slat (nsec): min=8322, max=76410, avg=10490.52, stdev=2948.62 00:37:36.419 clat (usec): min=5485, max=12496, avg=8291.73, stdev=766.96 00:37:36.419 lat (usec): min=5500, max=12505, avg=8302.22, stdev=766.71 00:37:36.419 clat percentiles (usec): 00:37:36.419 | 1.00th=[ 6652], 5.00th=[ 7111], 10.00th=[ 7439], 20.00th=[ 7701], 00:37:36.419 | 30.00th=[ 7898], 40.00th=[ 8029], 50.00th=[ 8225], 60.00th=[ 8455], 00:37:36.419 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9241], 95.00th=[ 9634], 00:37:36.419 | 99.00th=[10421], 99.50th=[10683], 99.90th=[11863], 99.95th=[12387], 00:37:36.419 | 99.99th=[12518] 00:37:36.419 bw ( KiB/s): min=40448, max=47872, per=40.10%, avg=46196.95, stdev=2199.38, samples=19 00:37:36.419 iops : min= 316, max= 374, avg=360.89, stdev=17.23, samples=19 00:37:36.419 lat (msec) : 10=97.59%, 20=2.41% 00:37:36.419 cpu : usr=95.42%, sys=4.26%, ctx=38, majf=0, minf=252 00:37:36.419 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:36.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.419 issued rwts: total=3614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.419 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:36.419 filename0: (groupid=0, jobs=1): err= 0: pid=2276707: Wed Nov 20 17:20:26 2024 00:37:36.419 read: IOPS=272, BW=34.0MiB/s (35.7MB/s)(342MiB/10048msec) 00:37:36.419 slat (nsec): min=5966, max=37041, avg=8266.91, stdev=1921.87 00:37:36.419 clat (usec): min=6061, max=53245, avg=10995.37, stdev=1411.49 00:37:36.419 lat (usec): min=6068, max=53253, avg=11003.63, stdev=1411.45 00:37:36.419 clat percentiles (usec): 00:37:36.419 | 1.00th=[ 8717], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10290], 00:37:36.419 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:37:36.419 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:37:36.419 | 99.00th=[13304], 99.50th=[13435], 99.90th=[14091], 99.95th=[47973], 00:37:36.419 | 99.99th=[53216] 00:37:36.419 bw ( KiB/s): min=33792, max=37120, per=30.37%, avg=34982.40, stdev=852.20, samples=20 00:37:36.419 iops : min= 264, max= 290, avg=273.30, stdev= 6.66, samples=20 00:37:36.419 lat (msec) : 10=13.16%, 20=86.76%, 50=0.04%, 100=0.04% 00:37:36.419 cpu : usr=94.42%, sys=5.32%, ctx=32, majf=0, minf=140 00:37:36.419 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:36.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.419 issued rwts: total=2735,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.419 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:36.419 filename0: (groupid=0, jobs=1): err= 0: pid=2276708: Wed Nov 20 17:20:26 2024 00:37:36.419 read: IOPS=268, BW=33.5MiB/s (35.1MB/s)(337MiB/10046msec) 00:37:36.419 slat (nsec): min=5859, max=45206, avg=8082.23, stdev=2039.32 00:37:36.419 clat (usec): min=7718, max=53296, avg=11161.40, stdev=1969.62 00:37:36.419 lat (usec): min=7725, max=53305, avg=11169.48, stdev=1969.62 00:37:36.419 clat percentiles (usec): 00:37:36.419 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:37:36.419 | 
30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:37:36.419 | 70.00th=[11469], 80.00th=[11863], 90.00th=[12387], 95.00th=[12780], 00:37:36.419 | 99.00th=[13435], 99.50th=[13960], 99.90th=[51643], 99.95th=[52691], 00:37:36.419 | 99.99th=[53216] 00:37:36.419 bw ( KiB/s): min=32256, max=36864, per=29.91%, avg=34457.60, stdev=872.70, samples=20 00:37:36.419 iops : min= 252, max= 288, avg=269.20, stdev= 6.82, samples=20 00:37:36.419 lat (msec) : 10=11.62%, 20=88.20%, 50=0.04%, 100=0.15% 00:37:36.419 cpu : usr=94.58%, sys=5.16%, ctx=16, majf=0, minf=122 00:37:36.419 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:36.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.419 issued rwts: total=2694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.419 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:36.419 00:37:36.419 Run status group 0 (all jobs): 00:37:36.419 READ: bw=112MiB/s (118MB/s), 33.5MiB/s-45.2MiB/s (35.1MB/s-47.3MB/s), io=1130MiB (1185MB), run=10005-10048msec 00:37:36.419 17:20:26 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:36.419 17:20:26 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:36.419 17:20:26 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:36.420 17:20:26 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:36.420 17:20:26 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:36.420 17:20:26 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:36.420 17:20:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.420 17:20:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:36.420 17:20:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.420 17:20:26 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:36.420 17:20:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.420 17:20:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:36.420 17:20:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.420 00:37:36.420 real 0m11.207s 00:37:36.420 user 0m43.074s 00:37:36.420 sys 0m1.771s 00:37:36.420 17:20:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:36.420 17:20:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:36.420 ************************************ 00:37:36.420 END TEST fio_dif_digest 00:37:36.420 ************************************ 00:37:36.420 17:20:27 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:36.420 17:20:27 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:36.420 17:20:27 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:36.420 17:20:27 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:36.420 17:20:27 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:36.420 17:20:27 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:36.420 17:20:27 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:36.420 17:20:27 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:36.420 rmmod nvme_tcp 00:37:36.420 rmmod nvme_fabrics 00:37:36.420 rmmod nvme_keyring 00:37:36.420 17:20:27 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:36.420 17:20:27 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:36.420 17:20:27 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:36.420 17:20:27 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2266546 ']' 00:37:36.420 17:20:27 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2266546 00:37:36.420 17:20:27 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2266546 ']' 00:37:36.420 17:20:27 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2266546 00:37:36.420 17:20:27 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:37:36.420 17:20:27 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:36.420 17:20:27 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2266546 00:37:36.420 17:20:27 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:36.420 17:20:27 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:36.420 17:20:27 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2266546' 00:37:36.420 killing process with pid 2266546 00:37:36.420 17:20:27 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2266546 00:37:36.420 17:20:27 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2266546 00:37:36.420 17:20:27 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:36.420 17:20:27 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:38.967 Waiting for block devices as requested 00:37:38.967 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:38.967 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:38.967 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:38.967 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:38.967 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:38.967 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:39.228 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:39.228 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:39.228 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:39.489 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:39.489 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:39.489 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:39.750 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:39.750 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:39.750 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:40.011 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:40.011 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:40.273 17:20:32 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:40.273 17:20:32 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:40.273 17:20:32 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:40.274 17:20:32 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:40.274 17:20:32 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:40.274 17:20:32 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:40.274 17:20:32 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:40.274 17:20:32 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:40.274 17:20:32 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:40.274 17:20:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:40.274 17:20:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:42.821 17:20:34 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:42.821 
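The iptr helper traced here removes only the firewall rules SPDK installed, by filtering the saved ruleset on the SPDK_NVMF comment those rules were tagged with and restoring the remainder; the whole operation is effectively the one pipeline visible in the trace:

    iptables-save | grep -v SPDK_NVMF | iptables-restore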
00:37:42.821 real 1m18.357s 00:37:42.821 user 7m53.340s 00:37:42.821 sys 0m22.514s 00:37:42.821 17:20:34 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:42.821 17:20:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:42.821 ************************************ 00:37:42.821 END TEST nvmf_dif 00:37:42.821 ************************************ 00:37:42.821 17:20:34 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:42.821 17:20:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:42.821 17:20:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:42.821 17:20:34 -- common/autotest_common.sh@10 -- # set +x 00:37:42.821 ************************************ 00:37:42.821 START TEST nvmf_abort_qd_sizes 00:37:42.821 ************************************ 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:42.821 * Looking for test storage... 00:37:42.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:42.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.821 --rc genhtml_branch_coverage=1 00:37:42.821 --rc genhtml_function_coverage=1 00:37:42.821 --rc genhtml_legend=1 00:37:42.821 --rc geninfo_all_blocks=1 00:37:42.821 --rc geninfo_unexecuted_blocks=1 00:37:42.821 00:37:42.821 ' 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:42.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.821 --rc genhtml_branch_coverage=1 00:37:42.821 --rc genhtml_function_coverage=1 00:37:42.821 --rc genhtml_legend=1 00:37:42.821 --rc geninfo_all_blocks=1 00:37:42.821 --rc geninfo_unexecuted_blocks=1 00:37:42.821 00:37:42.821 ' 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:42.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.821 --rc genhtml_branch_coverage=1 00:37:42.821 --rc genhtml_function_coverage=1 00:37:42.821 --rc genhtml_legend=1 00:37:42.821 --rc geninfo_all_blocks=1 00:37:42.821 --rc geninfo_unexecuted_blocks=1 00:37:42.821 00:37:42.821 ' 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:42.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.821 --rc genhtml_branch_coverage=1 00:37:42.821 --rc genhtml_function_coverage=1 00:37:42.821 --rc genhtml_legend=1 00:37:42.821 --rc geninfo_all_blocks=1 00:37:42.821 --rc geninfo_unexecuted_blocks=1 00:37:42.821 00:37:42.821 ' 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:42.821 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:42.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:37:42.822 17:20:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:50.969 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:50.969 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:50.969 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:50.969 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:50.969 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:50.969 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:50.969 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:50.969 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:50.969 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:50.969 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:50.969 17:20:41 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:37:50.969 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:50.969 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:50.969 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:50.969 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:50.970 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:50.970 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:50.970 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:50.970 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:50.970 17:20:41 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:50.970 17:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:50.970 17:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:50.970 17:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:50.970 17:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:50.970 17:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:50.970 17:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:50.970 17:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:50.970 17:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:50.970 17:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:50.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:50.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:37:50.970 00:37:50.970 --- 10.0.0.2 ping statistics --- 00:37:50.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:50.970 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:37:50.970 17:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:50.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
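The ping pair here verifies the topology nvmf_tcp_init built just above: one of the two detected e810 ports is moved into a private network namespace to act as the target, while its peer stays in the root namespace as the initiator. Condensed from the trace, with the interface names the log reports:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # plus the SPDK_NVMF comment tag shown in the trace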
00:37:50.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:37:50.970 00:37:50.970 --- 10.0.0.1 ping statistics --- 00:37:50.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:50.970 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:37:50.970 17:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:50.970 17:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:50.970 17:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:50.970 17:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:53.514 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:53.514 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:53.514 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:53.514 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:53.514 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:53.775 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:53.775 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:53.775 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:53.775 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:53.775 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:53.775 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:53.775 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:53.775 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:53.775 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:53.775 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:53.775 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:53.775 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:54.346 17:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:54.346 17:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:54.346 17:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:54.346 17:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:54.346 17:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:54.346 17:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:54.346 17:20:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:54.346 17:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:54.346 17:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:54.346 17:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:54.346 17:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2286153 00:37:54.346 17:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2286153 00:37:54.346 17:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:54.346 17:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2286153 ']' 00:37:54.346 17:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:54.346 17:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:54.346 17:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:54.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:54.346 17:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:54.346 17:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:54.346 [2024-11-20 17:20:46.341025] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:37:54.346 [2024-11-20 17:20:46.341087] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:54.346 [2024-11-20 17:20:46.441018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:54.346 [2024-11-20 17:20:46.496417] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:54.346 [2024-11-20 17:20:46.496472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:54.346 [2024-11-20 17:20:46.496480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:54.346 [2024-11-20 17:20:46.496487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:54.346 [2024-11-20 17:20:46.496495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:54.346 [2024-11-20 17:20:46.498503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:54.346 [2024-11-20 17:20:46.498663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:54.346 [2024-11-20 17:20:46.498823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:54.346 [2024-11-20 17:20:46.498823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:55.287 17:20:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:55.287 17:20:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:37:55.287 17:20:47 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:55.287 17:20:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:55.287 17:20:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:55.287 17:20:47 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:55.287 17:20:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:55.287 17:20:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:55.287 17:20:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:55.287 17:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:55.287 17:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:55.287 17:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:55.287 17:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:55.287 17:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:55.287 17:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:37:55.287 17:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:55.288 
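nvme_in_userspace, traced here, enumerates NVMe-class PCI functions (class code 0x010802) from the cached PCI bus and filters out any the kernel is actively using, returning the BDFs a test may claim, in this run just 0000:65:00.0. A rough sketch of the enumeration step, assuming lspci is available (not how the script itself does it):

    lspci -Dn -d ::0108 | awk '{print $1}'    # -> 0000:65:00.0 (144d:a80a)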
17:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:55.288 17:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:55.288 17:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:55.288 17:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:55.288 17:20:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:55.288 17:20:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:55.288 17:20:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:55.288 17:20:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:55.288 17:20:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:55.288 17:20:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:55.288 ************************************ 00:37:55.288 START TEST spdk_target_abort 00:37:55.288 ************************************ 00:37:55.288 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:37:55.288 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:55.288 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:55.288 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.288 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:55.548 spdk_targetn1 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:55.548 [2024-11-20 17:20:47.560176] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:55.548 [2024-11-20 17:20:47.612491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:55.548 17:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:55.809 [2024-11-20 17:20:47.812699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:190 nsid:1 lba:528 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:55.809 [2024-11-20 17:20:47.812732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0043 p:1 m:0 dnr:0 00:37:55.809 [2024-11-20 17:20:47.820640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:760 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:55.809 [2024-11-20 17:20:47.820661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0061 p:1 m:0 dnr:0 00:37:55.809 [2024-11-20 17:20:47.822937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:896 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:55.809 [2024-11-20 17:20:47.822954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0072 p:1 m:0 dnr:0 00:37:55.809 [2024-11-20 17:20:47.836622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1288 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:55.809 [2024-11-20 17:20:47.836643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00a2 p:1 m:0 dnr:0 00:37:55.809 [2024-11-20 17:20:47.844665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1552 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:55.809 [2024-11-20 17:20:47.844684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00c4 p:1 m:0 dnr:0 00:37:55.809 [2024-11-20 17:20:47.868634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2352 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:55.809 [2024-11-20 17:20:47.868655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:55.809 [2024-11-20 17:20:47.869357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2408 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:55.809 [2024-11-20 17:20:47.869373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:55.809 [2024-11-20 17:20:47.908674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3648 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:55.809 [2024-11-20 17:20:47.908695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00cb p:0 m:0 dnr:0 00:37:55.809 [2024-11-20 17:20:47.918239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:4008 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:55.809 [2024-11-20 17:20:47.918258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00f6 p:0 m:0 dnr:0 00:37:59.113 Initializing NVMe Controllers 00:37:59.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:59.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:59.113 Initialization complete. Launching workers. 
00:37:59.113 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11201, failed: 9 00:37:59.113 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2131, failed to submit 9079 00:37:59.113 success 756, unsuccessful 1375, failed 0 00:37:59.113 17:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:59.113 17:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:59.113 [2024-11-20 17:20:51.046398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:624 len:8 PRP1 0x200004e5e000 PRP2 0x0 00:37:59.113 [2024-11-20 17:20:51.046438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:0062 p:1 m:0 dnr:0 00:37:59.113 [2024-11-20 17:20:51.069139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:1176 len:8 PRP1 0x200004e40000 PRP2 0x0 00:37:59.113 [2024-11-20 17:20:51.069170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:0098 p:1 m:0 dnr:0 00:37:59.113 [2024-11-20 17:20:51.092319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:1648 len:8 PRP1 0x200004e40000 PRP2 0x0 00:37:59.113 [2024-11-20 17:20:51.092342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:00da p:1 m:0 dnr:0 00:37:59.113 [2024-11-20 17:20:51.099649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:1904 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:37:59.113 [2024-11-20 17:20:51.099670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:00f4 p:1 m:0 dnr:0 00:37:59.113 [2024-11-20 17:20:51.115187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:2176 len:8 PRP1 0x200004e44000 PRP2 0x0 00:37:59.113 [2024-11-20 17:20:51.115209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:37:59.113 [2024-11-20 17:20:51.153214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:3136 len:8 PRP1 0x200004e60000 PRP2 0x0 00:37:59.113 [2024-11-20 17:20:51.153237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:0094 p:0 m:0 dnr:0 00:37:59.113 [2024-11-20 17:20:51.168424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:3616 len:8 PRP1 0x200004e48000 PRP2 0x0 00:37:59.113 [2024-11-20 17:20:51.168445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:00c6 p:0 m:0 dnr:0 00:38:01.659 [2024-11-20 17:20:53.709234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:61472 len:8 PRP1 0x200004e58000 PRP2 0x0 00:38:01.659 [2024-11-20 17:20:53.709281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:38:02.231 Initializing NVMe Controllers 00:38:02.231 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:testnqn 00:38:02.231 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:02.231 Initialization complete. Launching workers. 00:38:02.231 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8571, failed: 8 00:38:02.231 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1234, failed to submit 7345 00:38:02.231 success 320, unsuccessful 914, failed 0 00:38:02.231 17:20:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:02.231 17:20:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:02.492 [2024-11-20 17:20:54.624758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:153 nsid:1 lba:37576 len:8 PRP1 0x200004aee000 PRP2 0x0 00:38:02.492 [2024-11-20 17:20:54.624786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:153 cdw0:0 sqhd:0024 p:1 m:0 dnr:0 00:38:04.401 [2024-11-20 17:20:56.220738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:148 nsid:1 lba:227584 len:8 PRP1 0x200004ae0000 PRP2 0x0 00:38:04.402 [2024-11-20 17:20:56.220764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:148 cdw0:0 sqhd:00f5 p:1 m:0 dnr:0 00:38:05.342 Initializing NVMe Controllers 00:38:05.342 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:05.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:05.342 Initialization complete. Launching workers. 
00:38:05.342 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 44830, failed: 2 00:38:05.342 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2471, failed to submit 42361 00:38:05.342 success 603, unsuccessful 1868, failed 0 00:38:05.342 17:20:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:05.342 17:20:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.342 17:20:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:05.342 17:20:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.342 17:20:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:05.342 17:20:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.342 17:20:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:07.253 17:20:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.253 17:20:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2286153 00:38:07.253 17:20:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2286153 ']' 00:38:07.253 17:20:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2286153 00:38:07.253 17:20:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:38:07.253 17:20:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:07.253 17:20:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2286153 00:38:07.253 17:20:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:07.253 17:20:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:07.253 17:20:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2286153' 00:38:07.253 killing process with pid 2286153 00:38:07.253 17:20:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2286153 00:38:07.253 17:20:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2286153 00:38:07.253 00:38:07.253 real 0m12.114s 00:38:07.253 user 0m49.252s 00:38:07.253 sys 0m2.080s 00:38:07.253 17:20:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:07.253 17:20:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:07.253 ************************************ 00:38:07.253 END TEST spdk_target_abort 00:38:07.253 ************************************ 00:38:07.253 17:20:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:07.253 17:20:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:07.253 17:20:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:07.253 17:20:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:07.514 ************************************ 00:38:07.514 START TEST kernel_target_abort 00:38:07.514 
************************************ 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:07.514 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:07.515 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:07.515 17:20:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:10.817 Waiting for block devices as requested 00:38:10.817 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:10.817 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:10.817 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:11.078 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:11.078 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:11.078 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:11.338 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:11.338 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:11.338 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:11.599 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:11.599 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:11.859 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:11.859 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:11.859 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:12.120 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:12.120 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:12.120 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:12.381 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:12.381 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:12.381 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:38:12.381 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:38:12.381 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:12.381 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:12.381 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:38:12.381 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:12.381 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:12.642 No valid GPT data, bailing 00:38:12.642 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:12.642 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:12.642 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:12.642 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:38:12.642 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:38:12.642 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:12.642 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:12.642 17:21:04 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:12.642 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:12.642 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:38:12.642 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:38:12.642 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:38:12.642 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:38:12.642 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:38:12.642 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:38:12.642 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:38:12.642 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:12.642 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:38:12.642 00:38:12.642 Discovery Log Number of Records 2, Generation counter 2 00:38:12.642 =====Discovery Log Entry 0====== 00:38:12.642 trtype: tcp 00:38:12.642 adrfam: ipv4 00:38:12.642 subtype: current discovery subsystem 00:38:12.642 treq: not specified, sq flow control disable supported 00:38:12.642 portid: 1 00:38:12.643 trsvcid: 4420 00:38:12.643 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:12.643 traddr: 10.0.0.1 00:38:12.643 eflags: none 00:38:12.643 sectype: none 00:38:12.643 =====Discovery Log Entry 1====== 00:38:12.643 trtype: tcp 00:38:12.643 adrfam: ipv4 00:38:12.643 subtype: nvme subsystem 00:38:12.643 treq: not specified, sq flow control disable supported 00:38:12.643 portid: 1 00:38:12.643 trsvcid: 4420 00:38:12.643 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:12.643 traddr: 10.0.0.1 00:38:12.643 eflags: none 00:38:12.643 sectype: none 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:12.643 17:21:04 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:12.643 17:21:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:15.945 Initializing NVMe Controllers 00:38:15.945 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:15.945 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:15.945 Initialization complete. Launching workers. 00:38:15.945 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67452, failed: 0 00:38:15.945 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67452, failed to submit 0 00:38:15.946 success 0, unsuccessful 67452, failed 0 00:38:15.946 17:21:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:15.946 17:21:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:19.247 Initializing NVMe Controllers 00:38:19.247 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:19.247 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:19.247 Initialization complete. Launching workers. 
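The kernel_target_abort runs above drive the same abort loop at a target built from the Linux nvmet driver instead of an SPDK process. The mkdir/echo/ln -s sequence traced above (configure_kernel_target in nvmf/common.sh) maps onto the standard nvmet configfs layout; a minimal sketch follows. The attribute file names (attr_serial, attr_allow_any_host, device_path, addr_*) are assumptions from that layout -- the xtrace shows only the echoed values, not the redirection targets:

    # Minimal sketch of the configfs setup traced above. Attribute file
    # names are assumed from the standard nvmet configfs layout.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"   # serial string; file name assumed
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"          # block device selected by the scan above
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"    # publish the subsystem on the port
    # Teardown (clean_kernel_target, traced after the abort runs below)
    # reverses this: disable the namespace, drop the symlink, rmdir the
    # objects innermost-first, then modprobe -r nvmet_tcp nvmet.

The 'nvme discover' listing with two discovery log records, printed right after the ln -s in the trace, is the sanity check that the port/subsystem linkage took effect.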
00:38:19.247 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 117026, failed: 0 00:38:19.247 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29446, failed to submit 87580 00:38:19.247 success 0, unsuccessful 29446, failed 0 00:38:19.247 17:21:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:19.247 17:21:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:22.546 Initializing NVMe Controllers 00:38:22.546 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:22.546 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:22.546 Initialization complete. Launching workers. 00:38:22.546 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146184, failed: 0 00:38:22.546 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36598, failed to submit 109586 00:38:22.546 success 0, unsuccessful 36598, failed 0 00:38:22.546 17:21:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:22.546 17:21:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:22.546 17:21:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:22.546 17:21:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:22.546 17:21:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:22.546 17:21:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:22.546 17:21:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:22.546 17:21:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:22.546 17:21:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:22.546 17:21:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:25.848 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:25.848 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:25.848 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:25.848 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:25.848 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:25.848 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:25.848 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:25.848 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:25.848 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:25.848 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:25.848 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:25.848 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:25.848 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:25.848 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:25.848 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:38:25.848 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:27.233 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:27.805 00:38:27.805 real 0m20.297s 00:38:27.805 user 0m10.018s 00:38:27.805 sys 0m5.928s 00:38:27.805 17:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:27.805 17:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:27.805 ************************************ 00:38:27.805 END TEST kernel_target_abort 00:38:27.805 ************************************ 00:38:27.805 17:21:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:27.805 17:21:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:27.805 17:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:27.805 17:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:27.805 17:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:27.805 17:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:27.805 17:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:27.805 17:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:27.805 rmmod nvme_tcp 00:38:27.805 rmmod nvme_fabrics 00:38:27.805 rmmod nvme_keyring 00:38:27.805 17:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:27.805 17:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:27.805 17:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:27.805 17:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2286153 ']' 00:38:27.805 17:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2286153 00:38:27.805 17:21:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2286153 ']' 00:38:27.805 17:21:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2286153 00:38:27.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2286153) - No such process 00:38:27.805 17:21:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2286153 is not found' 00:38:27.805 Process with pid 2286153 is not found 00:38:27.805 17:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:27.805 17:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:31.108 Waiting for block devices as requested 00:38:31.108 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:31.369 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:31.370 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:31.370 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:31.631 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:31.631 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:31.631 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:31.631 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:32.015 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:32.015 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:32.015 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:32.348 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:32.348 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:32.348 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:32.348 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:32.609 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:32.609 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:32.869 17:21:24 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:32.869 17:21:24 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:32.869 17:21:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:32.869 17:21:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:32.869 17:21:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:32.869 17:21:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:32.869 17:21:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:32.869 17:21:24 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:32.869 17:21:24 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:32.869 17:21:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:32.869 17:21:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:35.411 17:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:35.411 00:38:35.412 real 0m52.465s 00:38:35.412 user 1m4.870s 00:38:35.412 sys 0m19.054s 00:38:35.412 17:21:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:35.412 17:21:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:35.412 ************************************ 00:38:35.412 END TEST nvmf_abort_qd_sizes 00:38:35.412 ************************************ 00:38:35.412 17:21:27 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:35.412 17:21:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:35.412 17:21:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:35.412 17:21:27 -- common/autotest_common.sh@10 -- # set +x 00:38:35.412 ************************************ 00:38:35.412 START TEST keyring_file 00:38:35.412 ************************************ 00:38:35.412 17:21:27 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:35.412 * Looking for test storage... 
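Every suite in this log is bracketed by the run_test helper from common/autotest_common.sh, which is where the START TEST/END TEST banners and the real/user/sys timing blocks above come from. A simplified shape of it (the real helper also juggles xtrace state and records per-test timing; treat this as a sketch, not the actual implementation):

    # Simplified run_test: banner, timed execution, banner. The bash
    # 'time' keyword is what emits the real/user/sys lines in the log.
    run_test() {
        local test_name=$1; shift
        echo "************ START TEST $test_name ************"
        time "$@"
        echo "************ END TEST $test_name ************"
    }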
00:38:35.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:35.412 17:21:27 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:35.412 17:21:27 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:38:35.412 17:21:27 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:35.412 17:21:27 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:35.412 17:21:27 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:35.412 17:21:27 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:35.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.412 --rc genhtml_branch_coverage=1 00:38:35.412 --rc genhtml_function_coverage=1 00:38:35.412 --rc genhtml_legend=1 00:38:35.412 --rc geninfo_all_blocks=1 00:38:35.412 --rc geninfo_unexecuted_blocks=1 00:38:35.412 00:38:35.412 ' 00:38:35.412 17:21:27 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:35.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.412 --rc genhtml_branch_coverage=1 00:38:35.412 --rc genhtml_function_coverage=1 00:38:35.412 --rc genhtml_legend=1 00:38:35.412 --rc geninfo_all_blocks=1 
00:38:35.412 --rc geninfo_unexecuted_blocks=1 00:38:35.412 00:38:35.412 ' 00:38:35.412 17:21:27 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:35.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.412 --rc genhtml_branch_coverage=1 00:38:35.412 --rc genhtml_function_coverage=1 00:38:35.412 --rc genhtml_legend=1 00:38:35.412 --rc geninfo_all_blocks=1 00:38:35.412 --rc geninfo_unexecuted_blocks=1 00:38:35.412 00:38:35.412 ' 00:38:35.412 17:21:27 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:35.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.412 --rc genhtml_branch_coverage=1 00:38:35.412 --rc genhtml_function_coverage=1 00:38:35.412 --rc genhtml_legend=1 00:38:35.412 --rc geninfo_all_blocks=1 00:38:35.412 --rc geninfo_unexecuted_blocks=1 00:38:35.412 00:38:35.412 ' 00:38:35.412 17:21:27 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:35.412 17:21:27 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:35.412 17:21:27 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:35.412 17:21:27 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.412 17:21:27 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.412 17:21:27 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.412 17:21:27 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:35.412 17:21:27 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:35.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:35.412 17:21:27 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:35.412 17:21:27 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:35.412 17:21:27 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:35.412 17:21:27 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:35.412 17:21:27 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:35.412 17:21:27 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:35.412 17:21:27 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:35.412 17:21:27 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:35.412 17:21:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
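prep_key, entered on the last traced line above, is what turns the raw hex strings key0/key1 into on-disk TLS PSK files for the keyring tests. Condensed from the trace that follows -- the interchange payload is left symbolic here, since the inline python in the trace is what actually base64-encodes the key bytes plus a CRC-32 (digest=0 selecting the '00' no-hash variant of the NVMe/TCP PSK interchange format; treat that encoding detail as an assumption):

    # Condensed prep_key flow for key0 (keyring/common.sh); key1 is the
    # same with its own hex string. Registration happens over the bperf
    # socket once bdevperf is up, as traced further below.
    key=00112233445566778899aabbccddeeff
    path=$(mktemp)                 # e.g. /tmp/tmp.ENMf7Db6z0 in this run
    echo "NVMeTLSkey-1:00:<base64(key bytes + CRC-32)>:" > "$path"
    chmod 0600 "$path"             # the keyring rejects wider modes; see the 0660 negative test below
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"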
00:38:35.412 17:21:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:35.413 17:21:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:35.413 17:21:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:35.413 17:21:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:35.413 17:21:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ENMf7Db6z0 00:38:35.413 17:21:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:35.413 17:21:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:35.413 17:21:27 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:35.413 17:21:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:35.413 17:21:27 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:35.413 17:21:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:35.413 17:21:27 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:35.413 17:21:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ENMf7Db6z0 00:38:35.413 17:21:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ENMf7Db6z0 00:38:35.413 17:21:27 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ENMf7Db6z0 00:38:35.413 17:21:27 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:35.413 17:21:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:35.413 17:21:27 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:35.413 17:21:27 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:35.413 17:21:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:35.413 17:21:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:35.413 17:21:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NnlH0Gwdmc 00:38:35.413 17:21:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:35.413 17:21:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:35.413 17:21:27 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:35.413 17:21:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:35.413 17:21:27 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:35.413 17:21:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:35.413 17:21:27 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:35.413 17:21:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NnlH0Gwdmc 00:38:35.413 17:21:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NnlH0Gwdmc 00:38:35.413 17:21:27 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.NnlH0Gwdmc 00:38:35.413 17:21:27 keyring_file -- keyring/file.sh@30 -- # tgtpid=2297186 00:38:35.413 17:21:27 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2297186 00:38:35.413 17:21:27 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:35.413 17:21:27 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2297186 ']' 00:38:35.413 17:21:27 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:35.413 17:21:27 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:35.413 17:21:27 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:35.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:35.413 17:21:27 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:35.413 17:21:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:35.413 [2024-11-20 17:21:27.518438] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:38:35.413 [2024-11-20 17:21:27.518513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2297186 ] 00:38:35.675 [2024-11-20 17:21:27.612174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:35.675 [2024-11-20 17:21:27.666319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:36.245 17:21:28 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:36.245 17:21:28 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:36.245 17:21:28 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:36.245 17:21:28 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.245 17:21:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:36.245 [2024-11-20 17:21:28.335045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:36.245 null0 00:38:36.245 [2024-11-20 17:21:28.367091] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:36.245 [2024-11-20 17:21:28.367502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:36.245 17:21:28 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.245 17:21:28 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:36.245 17:21:28 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:36.245 17:21:28 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:36.245 17:21:28 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:36.245 17:21:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:36.245 17:21:28 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:36.245 17:21:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:36.245 17:21:28 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:36.245 17:21:28 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.246 17:21:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:36.246 [2024-11-20 17:21:28.399163] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:36.246 request: 00:38:36.246 { 00:38:36.246 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:36.246 "secure_channel": false, 00:38:36.246 "listen_address": { 00:38:36.246 "trtype": "tcp", 00:38:36.246 "traddr": "127.0.0.1", 00:38:36.246 "trsvcid": "4420" 00:38:36.246 }, 00:38:36.246 "method": "nvmf_subsystem_add_listener", 00:38:36.246 "req_id": 1 00:38:36.246 } 00:38:36.246 Got JSON-RPC error response 00:38:36.246 response: 00:38:36.246 { 00:38:36.246 
"code": -32602, 00:38:36.246 "message": "Invalid parameters" 00:38:36.246 } 00:38:36.246 17:21:28 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:36.246 17:21:28 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:36.246 17:21:28 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:36.246 17:21:28 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:36.246 17:21:28 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:36.246 17:21:28 keyring_file -- keyring/file.sh@47 -- # bperfpid=2297262 00:38:36.246 17:21:28 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2297262 /var/tmp/bperf.sock 00:38:36.246 17:21:28 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:36.246 17:21:28 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2297262 ']' 00:38:36.246 17:21:28 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:36.246 17:21:28 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:36.246 17:21:28 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:36.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:36.246 17:21:28 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:36.246 17:21:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:36.505 [2024-11-20 17:21:28.458714] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:38:36.505 [2024-11-20 17:21:28.458763] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2297262 ] 00:38:36.505 [2024-11-20 17:21:28.543378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.505 [2024-11-20 17:21:28.580301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:37.077 17:21:29 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:37.077 17:21:29 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:37.077 17:21:29 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ENMf7Db6z0 00:38:37.077 17:21:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ENMf7Db6z0 00:38:37.338 17:21:29 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.NnlH0Gwdmc 00:38:37.338 17:21:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.NnlH0Gwdmc 00:38:37.598 17:21:29 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:37.598 17:21:29 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:37.598 17:21:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:37.598 17:21:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:37.598 17:21:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:38:37.598 17:21:29 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.ENMf7Db6z0 == \/\t\m\p\/\t\m\p\.\E\N\M\f\7\D\b\6\z\0 ]] 00:38:37.598 17:21:29 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:37.598 17:21:29 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:37.598 17:21:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:37.598 17:21:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:37.598 17:21:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:37.858 17:21:29 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.NnlH0Gwdmc == \/\t\m\p\/\t\m\p\.\N\n\l\H\0\G\w\d\m\c ]] 00:38:37.858 17:21:29 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:37.858 17:21:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:37.858 17:21:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:37.858 17:21:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:37.858 17:21:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:37.858 17:21:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:38.118 17:21:30 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:38.118 17:21:30 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:38.118 17:21:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:38.118 17:21:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:38.118 17:21:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:38.118 17:21:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.118 17:21:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:38.380 17:21:30 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:38.380 17:21:30 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:38.380 17:21:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:38.380 [2024-11-20 17:21:30.481834] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:38.642 nvme0n1 00:38:38.642 17:21:30 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:38.642 17:21:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:38.642 17:21:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:38.642 17:21:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:38.642 17:21:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.642 17:21:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:38.642 17:21:30 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:38.642 17:21:30 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:38.642 17:21:30 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:38:38.642 17:21:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:38.642 17:21:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:38.642 17:21:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.642 17:21:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:38.903 17:21:30 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:38.903 17:21:30 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:39.164 Running I/O for 1 seconds... 00:38:40.103 16516.00 IOPS, 64.52 MiB/s 00:38:40.103 Latency(us) 00:38:40.103 [2024-11-20T16:21:32.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:40.103 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:40.103 nvme0n1 : 1.01 16560.63 64.69 0.00 0.00 7712.16 3044.69 13981.01 00:38:40.103 [2024-11-20T16:21:32.279Z] =================================================================================================================== 00:38:40.103 [2024-11-20T16:21:32.279Z] Total : 16560.63 64.69 0.00 0.00 7712.16 3044.69 13981.01 00:38:40.103 { 00:38:40.103 "results": [ 00:38:40.103 { 00:38:40.103 "job": "nvme0n1", 00:38:40.103 "core_mask": "0x2", 00:38:40.103 "workload": "randrw", 00:38:40.103 "percentage": 50, 00:38:40.103 "status": "finished", 00:38:40.103 "queue_depth": 128, 00:38:40.103 "io_size": 4096, 00:38:40.103 "runtime": 1.005155, 00:38:40.103 "iops": 16560.629952594376, 00:38:40.103 "mibps": 64.68996075232178, 00:38:40.103 "io_failed": 0, 00:38:40.103 "io_timeout": 0, 00:38:40.103 "avg_latency_us": 7712.157430413713, 00:38:40.103 "min_latency_us": 3044.693333333333, 00:38:40.103 "max_latency_us": 13981.013333333334 00:38:40.103 } 00:38:40.103 ], 00:38:40.103 "core_count": 1 00:38:40.103 } 00:38:40.103 17:21:32 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:40.103 17:21:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:40.362 17:21:32 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:40.362 17:21:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:40.362 17:21:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:40.362 17:21:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:40.362 17:21:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:40.362 17:21:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:40.362 17:21:32 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:40.362 17:21:32 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:40.362 17:21:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:40.362 17:21:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:40.362 17:21:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:40.362 17:21:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:40.362 17:21:32 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:40.622 17:21:32 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:40.622 17:21:32 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:40.622 17:21:32 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:40.622 17:21:32 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:40.622 17:21:32 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:40.622 17:21:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:40.622 17:21:32 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:40.622 17:21:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:40.622 17:21:32 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:40.622 17:21:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:40.882 [2024-11-20 17:21:32.801765] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:40.882 [2024-11-20 17:21:32.802520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dadf0 (107): Transport endpoint is not connected 00:38:40.882 [2024-11-20 17:21:32.803516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dadf0 (9): Bad file descriptor 00:38:40.882 [2024-11-20 17:21:32.804519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:40.882 [2024-11-20 17:21:32.804527] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:40.882 [2024-11-20 17:21:32.804533] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:40.882 [2024-11-20 17:21:32.804540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:40.882 request: 00:38:40.882 { 00:38:40.882 "name": "nvme0", 00:38:40.882 "trtype": "tcp", 00:38:40.882 "traddr": "127.0.0.1", 00:38:40.882 "adrfam": "ipv4", 00:38:40.882 "trsvcid": "4420", 00:38:40.882 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:40.882 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:40.882 "prchk_reftag": false, 00:38:40.882 "prchk_guard": false, 00:38:40.882 "hdgst": false, 00:38:40.882 "ddgst": false, 00:38:40.882 "psk": "key1", 00:38:40.882 "allow_unrecognized_csi": false, 00:38:40.882 "method": "bdev_nvme_attach_controller", 00:38:40.882 "req_id": 1 00:38:40.882 } 00:38:40.882 Got JSON-RPC error response 00:38:40.882 response: 00:38:40.882 { 00:38:40.882 "code": -5, 00:38:40.882 "message": "Input/output error" 00:38:40.882 } 00:38:40.882 17:21:32 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:40.882 17:21:32 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:40.882 17:21:32 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:40.882 17:21:32 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:40.883 17:21:32 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:40.883 17:21:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:40.883 17:21:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:40.883 17:21:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:40.883 17:21:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:40.883 17:21:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:40.883 17:21:33 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:40.883 17:21:33 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:40.883 17:21:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:40.883 17:21:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:40.883 17:21:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:40.883 17:21:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:40.883 17:21:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:41.143 17:21:33 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:41.143 17:21:33 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:41.143 17:21:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:41.403 17:21:33 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:41.403 17:21:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:41.403 17:21:33 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:41.403 17:21:33 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:41.403 17:21:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:41.663 17:21:33 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:38:41.663 17:21:33 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.ENMf7Db6z0 00:38:41.663 17:21:33 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ENMf7Db6z0 00:38:41.663 17:21:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:41.663 17:21:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ENMf7Db6z0 00:38:41.663 17:21:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:41.663 17:21:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:41.663 17:21:33 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:41.663 17:21:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:41.663 17:21:33 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ENMf7Db6z0 00:38:41.663 17:21:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ENMf7Db6z0 00:38:41.923 [2024-11-20 17:21:33.901366] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ENMf7Db6z0': 0100660 00:38:41.923 [2024-11-20 17:21:33.901383] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:41.923 request: 00:38:41.923 { 00:38:41.923 "name": "key0", 00:38:41.923 "path": "/tmp/tmp.ENMf7Db6z0", 00:38:41.923 "method": "keyring_file_add_key", 00:38:41.923 "req_id": 1 00:38:41.923 } 00:38:41.923 Got JSON-RPC error response 00:38:41.923 response: 00:38:41.923 { 00:38:41.923 "code": -1, 00:38:41.923 "message": "Operation not permitted" 00:38:41.923 } 00:38:41.923 17:21:33 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:41.923 17:21:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:41.923 17:21:33 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:41.923 17:21:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:41.923 17:21:33 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.ENMf7Db6z0 00:38:41.923 17:21:33 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ENMf7Db6z0 00:38:41.923 17:21:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ENMf7Db6z0 00:38:42.184 17:21:34 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.ENMf7Db6z0 00:38:42.184 17:21:34 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:42.184 17:21:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:42.184 17:21:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:42.184 17:21:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:42.184 17:21:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:42.184 17:21:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:42.184 17:21:34 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:42.184 17:21:34 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:42.184 17:21:34 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:42.184 17:21:34 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:42.184 17:21:34 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:42.184 17:21:34 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:42.184 17:21:34 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:42.184 17:21:34 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:42.184 17:21:34 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:42.184 17:21:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:42.443 [2024-11-20 17:21:34.474821] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ENMf7Db6z0': No such file or directory 00:38:42.443 [2024-11-20 17:21:34.474835] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:42.443 [2024-11-20 17:21:34.474848] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:42.443 [2024-11-20 17:21:34.474853] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:42.443 [2024-11-20 17:21:34.474859] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:42.443 [2024-11-20 17:21:34.474864] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:42.443 request: 00:38:42.443 { 00:38:42.443 "name": "nvme0", 00:38:42.443 "trtype": "tcp", 00:38:42.443 "traddr": "127.0.0.1", 00:38:42.443 "adrfam": "ipv4", 00:38:42.443 "trsvcid": "4420", 00:38:42.443 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:42.443 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:42.443 "prchk_reftag": false, 00:38:42.443 "prchk_guard": false, 00:38:42.443 "hdgst": false, 00:38:42.443 "ddgst": false, 00:38:42.443 "psk": "key0", 00:38:42.443 "allow_unrecognized_csi": false, 00:38:42.443 "method": "bdev_nvme_attach_controller", 00:38:42.443 "req_id": 1 00:38:42.443 } 00:38:42.443 Got JSON-RPC error response 00:38:42.443 response: 00:38:42.443 { 00:38:42.443 "code": -19, 00:38:42.443 "message": "No such device" 00:38:42.443 } 00:38:42.443 17:21:34 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:42.443 17:21:34 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:42.443 17:21:34 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:42.443 17:21:34 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:42.443 17:21:34 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:42.443 17:21:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:42.705 17:21:34 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:42.705 17:21:34 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:38:42.705 17:21:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:42.705 17:21:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:42.705 17:21:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:42.705 17:21:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:42.705 17:21:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IB8la5aX3b 00:38:42.705 17:21:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:42.705 17:21:34 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:42.705 17:21:34 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:42.705 17:21:34 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:42.705 17:21:34 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:42.705 17:21:34 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:42.705 17:21:34 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:42.705 17:21:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IB8la5aX3b 00:38:42.705 17:21:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IB8la5aX3b 00:38:42.705 17:21:34 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.IB8la5aX3b 00:38:42.705 17:21:34 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IB8la5aX3b 00:38:42.705 17:21:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IB8la5aX3b 00:38:42.965 17:21:34 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:42.965 17:21:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:42.965 nvme0n1 00:38:43.225 17:21:35 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:43.225 17:21:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:43.225 17:21:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:43.225 17:21:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:43.225 17:21:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:43.225 17:21:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:43.225 17:21:35 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:43.225 17:21:35 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:43.225 17:21:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:43.485 17:21:35 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:43.485 17:21:35 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:43.485 17:21:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:43.485 17:21:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:38:43.485 17:21:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:43.746 17:21:35 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:43.746 17:21:35 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:43.746 17:21:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:43.746 17:21:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:43.746 17:21:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:43.746 17:21:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:43.746 17:21:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:43.746 17:21:35 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:43.746 17:21:35 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:43.746 17:21:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:44.006 17:21:36 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:44.006 17:21:36 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:44.006 17:21:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:44.268 17:21:36 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:44.268 17:21:36 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IB8la5aX3b 00:38:44.268 17:21:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IB8la5aX3b 00:38:44.268 17:21:36 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.NnlH0Gwdmc 00:38:44.269 17:21:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.NnlH0Gwdmc 00:38:44.529 17:21:36 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:44.529 17:21:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:44.790 nvme0n1 00:38:44.790 17:21:36 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:44.790 17:21:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:45.051 17:21:37 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:45.051 "subsystems": [ 00:38:45.051 { 00:38:45.051 "subsystem": "keyring", 00:38:45.051 "config": [ 00:38:45.051 { 00:38:45.051 "method": "keyring_file_add_key", 00:38:45.051 "params": { 00:38:45.051 "name": "key0", 00:38:45.051 "path": "/tmp/tmp.IB8la5aX3b" 00:38:45.051 } 00:38:45.051 }, 00:38:45.051 { 00:38:45.051 "method": "keyring_file_add_key", 00:38:45.051 "params": { 00:38:45.051 "name": "key1", 00:38:45.051 "path": "/tmp/tmp.NnlH0Gwdmc" 00:38:45.051 } 00:38:45.051 } 00:38:45.051 ] 
00:38:45.051 }, 00:38:45.051 { 00:38:45.051 "subsystem": "iobuf", 00:38:45.051 "config": [ 00:38:45.051 { 00:38:45.051 "method": "iobuf_set_options", 00:38:45.051 "params": { 00:38:45.051 "small_pool_count": 8192, 00:38:45.051 "large_pool_count": 1024, 00:38:45.051 "small_bufsize": 8192, 00:38:45.051 "large_bufsize": 135168, 00:38:45.051 "enable_numa": false 00:38:45.051 } 00:38:45.051 } 00:38:45.051 ] 00:38:45.051 }, 00:38:45.051 { 00:38:45.051 "subsystem": "sock", 00:38:45.051 "config": [ 00:38:45.051 { 00:38:45.051 "method": "sock_set_default_impl", 00:38:45.051 "params": { 00:38:45.051 "impl_name": "posix" 00:38:45.051 } 00:38:45.051 }, 00:38:45.051 { 00:38:45.051 "method": "sock_impl_set_options", 00:38:45.051 "params": { 00:38:45.051 "impl_name": "ssl", 00:38:45.051 "recv_buf_size": 4096, 00:38:45.051 "send_buf_size": 4096, 00:38:45.051 "enable_recv_pipe": true, 00:38:45.051 "enable_quickack": false, 00:38:45.051 "enable_placement_id": 0, 00:38:45.051 "enable_zerocopy_send_server": true, 00:38:45.051 "enable_zerocopy_send_client": false, 00:38:45.051 "zerocopy_threshold": 0, 00:38:45.051 "tls_version": 0, 00:38:45.051 "enable_ktls": false 00:38:45.051 } 00:38:45.051 }, 00:38:45.051 { 00:38:45.051 "method": "sock_impl_set_options", 00:38:45.051 "params": { 00:38:45.051 "impl_name": "posix", 00:38:45.051 "recv_buf_size": 2097152, 00:38:45.051 "send_buf_size": 2097152, 00:38:45.051 "enable_recv_pipe": true, 00:38:45.051 "enable_quickack": false, 00:38:45.051 "enable_placement_id": 0, 00:38:45.051 "enable_zerocopy_send_server": true, 00:38:45.051 "enable_zerocopy_send_client": false, 00:38:45.051 "zerocopy_threshold": 0, 00:38:45.051 "tls_version": 0, 00:38:45.051 "enable_ktls": false 00:38:45.051 } 00:38:45.051 } 00:38:45.051 ] 00:38:45.051 }, 00:38:45.051 { 00:38:45.051 "subsystem": "vmd", 00:38:45.051 "config": [] 00:38:45.051 }, 00:38:45.051 { 00:38:45.051 "subsystem": "accel", 00:38:45.051 "config": [ 00:38:45.051 { 00:38:45.051 "method": "accel_set_options", 00:38:45.051 "params": { 00:38:45.051 "small_cache_size": 128, 00:38:45.051 "large_cache_size": 16, 00:38:45.051 "task_count": 2048, 00:38:45.051 "sequence_count": 2048, 00:38:45.051 "buf_count": 2048 00:38:45.051 } 00:38:45.051 } 00:38:45.051 ] 00:38:45.051 }, 00:38:45.051 { 00:38:45.051 "subsystem": "bdev", 00:38:45.051 "config": [ 00:38:45.051 { 00:38:45.051 "method": "bdev_set_options", 00:38:45.051 "params": { 00:38:45.051 "bdev_io_pool_size": 65535, 00:38:45.051 "bdev_io_cache_size": 256, 00:38:45.051 "bdev_auto_examine": true, 00:38:45.051 "iobuf_small_cache_size": 128, 00:38:45.051 "iobuf_large_cache_size": 16 00:38:45.051 } 00:38:45.051 }, 00:38:45.051 { 00:38:45.051 "method": "bdev_raid_set_options", 00:38:45.051 "params": { 00:38:45.051 "process_window_size_kb": 1024, 00:38:45.051 "process_max_bandwidth_mb_sec": 0 00:38:45.051 } 00:38:45.051 }, 00:38:45.051 { 00:38:45.051 "method": "bdev_iscsi_set_options", 00:38:45.051 "params": { 00:38:45.051 "timeout_sec": 30 00:38:45.051 } 00:38:45.051 }, 00:38:45.051 { 00:38:45.051 "method": "bdev_nvme_set_options", 00:38:45.051 "params": { 00:38:45.051 "action_on_timeout": "none", 00:38:45.051 "timeout_us": 0, 00:38:45.051 "timeout_admin_us": 0, 00:38:45.051 "keep_alive_timeout_ms": 10000, 00:38:45.051 "arbitration_burst": 0, 00:38:45.051 "low_priority_weight": 0, 00:38:45.051 "medium_priority_weight": 0, 00:38:45.051 "high_priority_weight": 0, 00:38:45.051 "nvme_adminq_poll_period_us": 10000, 00:38:45.051 "nvme_ioq_poll_period_us": 0, 00:38:45.051 "io_queue_requests": 512, 
00:38:45.051 "delay_cmd_submit": true, 00:38:45.051 "transport_retry_count": 4, 00:38:45.051 "bdev_retry_count": 3, 00:38:45.051 "transport_ack_timeout": 0, 00:38:45.051 "ctrlr_loss_timeout_sec": 0, 00:38:45.051 "reconnect_delay_sec": 0, 00:38:45.051 "fast_io_fail_timeout_sec": 0, 00:38:45.051 "disable_auto_failback": false, 00:38:45.051 "generate_uuids": false, 00:38:45.051 "transport_tos": 0, 00:38:45.051 "nvme_error_stat": false, 00:38:45.051 "rdma_srq_size": 0, 00:38:45.051 "io_path_stat": false, 00:38:45.051 "allow_accel_sequence": false, 00:38:45.051 "rdma_max_cq_size": 0, 00:38:45.051 "rdma_cm_event_timeout_ms": 0, 00:38:45.051 "dhchap_digests": [ 00:38:45.051 "sha256", 00:38:45.051 "sha384", 00:38:45.051 "sha512" 00:38:45.051 ], 00:38:45.051 "dhchap_dhgroups": [ 00:38:45.051 "null", 00:38:45.051 "ffdhe2048", 00:38:45.051 "ffdhe3072", 00:38:45.051 "ffdhe4096", 00:38:45.051 "ffdhe6144", 00:38:45.051 "ffdhe8192" 00:38:45.051 ] 00:38:45.051 } 00:38:45.051 }, 00:38:45.051 { 00:38:45.051 "method": "bdev_nvme_attach_controller", 00:38:45.051 "params": { 00:38:45.051 "name": "nvme0", 00:38:45.051 "trtype": "TCP", 00:38:45.051 "adrfam": "IPv4", 00:38:45.051 "traddr": "127.0.0.1", 00:38:45.051 "trsvcid": "4420", 00:38:45.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:45.051 "prchk_reftag": false, 00:38:45.051 "prchk_guard": false, 00:38:45.051 "ctrlr_loss_timeout_sec": 0, 00:38:45.051 "reconnect_delay_sec": 0, 00:38:45.051 "fast_io_fail_timeout_sec": 0, 00:38:45.051 "psk": "key0", 00:38:45.051 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:45.051 "hdgst": false, 00:38:45.051 "ddgst": false, 00:38:45.051 "multipath": "multipath" 00:38:45.051 } 00:38:45.051 }, 00:38:45.051 { 00:38:45.051 "method": "bdev_nvme_set_hotplug", 00:38:45.051 "params": { 00:38:45.051 "period_us": 100000, 00:38:45.052 "enable": false 00:38:45.052 } 00:38:45.052 }, 00:38:45.052 { 00:38:45.052 "method": "bdev_wait_for_examine" 00:38:45.052 } 00:38:45.052 ] 00:38:45.052 }, 00:38:45.052 { 00:38:45.052 "subsystem": "nbd", 00:38:45.052 "config": [] 00:38:45.052 } 00:38:45.052 ] 00:38:45.052 }' 00:38:45.052 17:21:37 keyring_file -- keyring/file.sh@115 -- # killprocess 2297262 00:38:45.052 17:21:37 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2297262 ']' 00:38:45.052 17:21:37 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2297262 00:38:45.052 17:21:37 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:45.052 17:21:37 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:45.052 17:21:37 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2297262 00:38:45.052 17:21:37 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:45.052 17:21:37 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:45.052 17:21:37 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2297262' 00:38:45.052 killing process with pid 2297262 00:38:45.052 17:21:37 keyring_file -- common/autotest_common.sh@973 -- # kill 2297262 00:38:45.052 Received shutdown signal, test time was about 1.000000 seconds 00:38:45.052 00:38:45.052 Latency(us) 00:38:45.052 [2024-11-20T16:21:37.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:45.052 [2024-11-20T16:21:37.228Z] =================================================================================================================== 00:38:45.052 [2024-11-20T16:21:37.228Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:38:45.052 17:21:37 keyring_file -- common/autotest_common.sh@978 -- # wait 2297262 00:38:45.052 17:21:37 keyring_file -- keyring/file.sh@118 -- # bperfpid=2299072 00:38:45.052 17:21:37 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2299072 /var/tmp/bperf.sock 00:38:45.052 17:21:37 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2299072 ']' 00:38:45.052 17:21:37 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:45.052 17:21:37 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:45.052 17:21:37 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:45.052 17:21:37 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:45.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:45.052 17:21:37 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:45.052 17:21:37 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:45.052 "subsystems": [ 00:38:45.052 { 00:38:45.052 "subsystem": "keyring", 00:38:45.052 "config": [ 00:38:45.052 { 00:38:45.052 "method": "keyring_file_add_key", 00:38:45.052 "params": { 00:38:45.052 "name": "key0", 00:38:45.052 "path": "/tmp/tmp.IB8la5aX3b" 00:38:45.052 } 00:38:45.052 }, 00:38:45.052 { 00:38:45.052 "method": "keyring_file_add_key", 00:38:45.052 "params": { 00:38:45.052 "name": "key1", 00:38:45.052 "path": "/tmp/tmp.NnlH0Gwdmc" 00:38:45.052 } 00:38:45.052 } 00:38:45.052 ] 00:38:45.052 }, 00:38:45.052 { 00:38:45.052 "subsystem": "iobuf", 00:38:45.052 "config": [ 00:38:45.052 { 00:38:45.052 "method": "iobuf_set_options", 00:38:45.052 "params": { 00:38:45.052 "small_pool_count": 8192, 00:38:45.052 "large_pool_count": 1024, 00:38:45.052 "small_bufsize": 8192, 00:38:45.052 "large_bufsize": 135168, 00:38:45.052 "enable_numa": false 00:38:45.052 } 00:38:45.052 } 00:38:45.052 ] 00:38:45.052 }, 00:38:45.052 { 00:38:45.052 "subsystem": "sock", 00:38:45.052 "config": [ 00:38:45.052 { 00:38:45.052 "method": "sock_set_default_impl", 00:38:45.052 "params": { 00:38:45.052 "impl_name": "posix" 00:38:45.052 } 00:38:45.052 }, 00:38:45.052 { 00:38:45.052 "method": "sock_impl_set_options", 00:38:45.052 "params": { 00:38:45.052 "impl_name": "ssl", 00:38:45.052 "recv_buf_size": 4096, 00:38:45.052 "send_buf_size": 4096, 00:38:45.052 "enable_recv_pipe": true, 00:38:45.052 "enable_quickack": false, 00:38:45.052 "enable_placement_id": 0, 00:38:45.052 "enable_zerocopy_send_server": true, 00:38:45.052 "enable_zerocopy_send_client": false, 00:38:45.052 "zerocopy_threshold": 0, 00:38:45.052 "tls_version": 0, 00:38:45.052 "enable_ktls": false 00:38:45.052 } 00:38:45.052 }, 00:38:45.052 { 00:38:45.052 "method": "sock_impl_set_options", 00:38:45.052 "params": { 00:38:45.052 "impl_name": "posix", 00:38:45.052 "recv_buf_size": 2097152, 00:38:45.052 "send_buf_size": 2097152, 00:38:45.052 "enable_recv_pipe": true, 00:38:45.052 "enable_quickack": false, 00:38:45.052 "enable_placement_id": 0, 00:38:45.052 "enable_zerocopy_send_server": true, 00:38:45.052 "enable_zerocopy_send_client": false, 00:38:45.052 "zerocopy_threshold": 0, 00:38:45.052 "tls_version": 0, 00:38:45.052 "enable_ktls": false 00:38:45.052 } 00:38:45.052 } 00:38:45.052 ] 00:38:45.052 }, 00:38:45.052 { 00:38:45.052 "subsystem": "vmd", 00:38:45.052 
"config": [] 00:38:45.052 }, 00:38:45.052 { 00:38:45.052 "subsystem": "accel", 00:38:45.052 "config": [ 00:38:45.052 { 00:38:45.052 "method": "accel_set_options", 00:38:45.052 "params": { 00:38:45.052 "small_cache_size": 128, 00:38:45.052 "large_cache_size": 16, 00:38:45.052 "task_count": 2048, 00:38:45.052 "sequence_count": 2048, 00:38:45.052 "buf_count": 2048 00:38:45.052 } 00:38:45.052 } 00:38:45.052 ] 00:38:45.052 }, 00:38:45.052 { 00:38:45.052 "subsystem": "bdev", 00:38:45.052 "config": [ 00:38:45.052 { 00:38:45.052 "method": "bdev_set_options", 00:38:45.052 "params": { 00:38:45.052 "bdev_io_pool_size": 65535, 00:38:45.052 "bdev_io_cache_size": 256, 00:38:45.052 "bdev_auto_examine": true, 00:38:45.052 "iobuf_small_cache_size": 128, 00:38:45.052 "iobuf_large_cache_size": 16 00:38:45.052 } 00:38:45.052 }, 00:38:45.052 { 00:38:45.052 "method": "bdev_raid_set_options", 00:38:45.052 "params": { 00:38:45.052 "process_window_size_kb": 1024, 00:38:45.052 "process_max_bandwidth_mb_sec": 0 00:38:45.052 } 00:38:45.052 }, 00:38:45.052 { 00:38:45.052 "method": "bdev_iscsi_set_options", 00:38:45.052 "params": { 00:38:45.052 "timeout_sec": 30 00:38:45.052 } 00:38:45.052 }, 00:38:45.052 { 00:38:45.052 "method": "bdev_nvme_set_options", 00:38:45.052 "params": { 00:38:45.052 "action_on_timeout": "none", 00:38:45.052 "timeout_us": 0, 00:38:45.052 "timeout_admin_us": 0, 00:38:45.052 "keep_alive_timeout_ms": 10000, 00:38:45.052 "arbitration_burst": 0, 00:38:45.052 "low_priority_weight": 0, 00:38:45.052 "medium_priority_weight": 0, 00:38:45.052 "high_priority_weight": 0, 00:38:45.052 "nvme_adminq_poll_period_us": 10000, 00:38:45.052 "nvme_ioq_poll_period_us": 0, 00:38:45.052 "io_queue_requests": 512, 00:38:45.052 "delay_cmd_submit": true, 00:38:45.052 "transport_retry_count": 4, 00:38:45.052 "bdev_retry_count": 3, 00:38:45.052 "transport_ack_timeout": 0, 00:38:45.052 "ctrlr_loss_timeout_sec": 0, 00:38:45.052 "reconnect_delay_sec": 0, 00:38:45.052 "fast_io_fail_timeout_sec": 0, 00:38:45.052 "disable_auto_failback": false, 00:38:45.052 "generate_uuids": false, 00:38:45.052 "transport_tos": 0, 00:38:45.052 "nvme_error_stat": false, 00:38:45.052 "rdma_srq_size": 0, 00:38:45.052 17:21:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:45.052 "io_path_stat": false, 00:38:45.052 "allow_accel_sequence": false, 00:38:45.052 "rdma_max_cq_size": 0, 00:38:45.052 "rdma_cm_event_timeout_ms": 0, 00:38:45.052 "dhchap_digests": [ 00:38:45.052 "sha256", 00:38:45.052 "sha384", 00:38:45.052 "sha512" 00:38:45.052 ], 00:38:45.052 "dhchap_dhgroups": [ 00:38:45.052 "null", 00:38:45.052 "ffdhe2048", 00:38:45.052 "ffdhe3072", 00:38:45.052 "ffdhe4096", 00:38:45.052 "ffdhe6144", 00:38:45.052 "ffdhe8192" 00:38:45.052 ] 00:38:45.052 } 00:38:45.052 }, 00:38:45.052 { 00:38:45.052 "method": "bdev_nvme_attach_controller", 00:38:45.052 "params": { 00:38:45.052 "name": "nvme0", 00:38:45.052 "trtype": "TCP", 00:38:45.052 "adrfam": "IPv4", 00:38:45.052 "traddr": "127.0.0.1", 00:38:45.052 "trsvcid": "4420", 00:38:45.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:45.053 "prchk_reftag": false, 00:38:45.053 "prchk_guard": false, 00:38:45.053 "ctrlr_loss_timeout_sec": 0, 00:38:45.053 "reconnect_delay_sec": 0, 00:38:45.053 "fast_io_fail_timeout_sec": 0, 00:38:45.053 "psk": "key0", 00:38:45.053 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:45.053 "hdgst": false, 00:38:45.053 "ddgst": false, 00:38:45.053 "multipath": "multipath" 00:38:45.053 } 00:38:45.053 }, 00:38:45.053 { 00:38:45.053 "method": "bdev_nvme_set_hotplug", 00:38:45.053 
"params": { 00:38:45.053 "period_us": 100000, 00:38:45.053 "enable": false 00:38:45.053 } 00:38:45.053 }, 00:38:45.053 { 00:38:45.053 "method": "bdev_wait_for_examine" 00:38:45.053 } 00:38:45.053 ] 00:38:45.053 }, 00:38:45.053 { 00:38:45.053 "subsystem": "nbd", 00:38:45.053 "config": [] 00:38:45.053 } 00:38:45.053 ] 00:38:45.053 }' 00:38:45.313 [2024-11-20 17:21:37.261126] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 00:38:45.313 [2024-11-20 17:21:37.261207] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2299072 ] 00:38:45.313 [2024-11-20 17:21:37.344462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:45.313 [2024-11-20 17:21:37.374014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:45.574 [2024-11-20 17:21:37.517999] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:46.146 17:21:38 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:46.146 17:21:38 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:46.146 17:21:38 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:46.146 17:21:38 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:46.146 17:21:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:46.146 17:21:38 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:46.146 17:21:38 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:46.146 17:21:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:46.146 17:21:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:46.146 17:21:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:46.146 17:21:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:46.146 17:21:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:46.407 17:21:38 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:46.407 17:21:38 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:46.407 17:21:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:46.407 17:21:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:46.407 17:21:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:46.407 17:21:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:46.407 17:21:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:46.407 17:21:38 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:46.407 17:21:38 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:46.407 17:21:38 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:46.407 17:21:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:46.668 17:21:38 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:46.668 17:21:38 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:46.668 17:21:38 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.IB8la5aX3b /tmp/tmp.NnlH0Gwdmc 00:38:46.668 17:21:38 keyring_file -- keyring/file.sh@20 -- # killprocess 2299072 00:38:46.668 17:21:38 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2299072 ']' 00:38:46.668 17:21:38 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2299072 00:38:46.668 17:21:38 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:46.668 17:21:38 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:46.668 17:21:38 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2299072 00:38:46.668 17:21:38 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:46.668 17:21:38 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:46.668 17:21:38 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2299072' 00:38:46.668 killing process with pid 2299072 00:38:46.668 17:21:38 keyring_file -- common/autotest_common.sh@973 -- # kill 2299072 00:38:46.668 Received shutdown signal, test time was about 1.000000 seconds 00:38:46.668 00:38:46.668 Latency(us) 00:38:46.668 [2024-11-20T16:21:38.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:46.668 [2024-11-20T16:21:38.844Z] =================================================================================================================== 00:38:46.668 [2024-11-20T16:21:38.844Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:46.668 17:21:38 keyring_file -- common/autotest_common.sh@978 -- # wait 2299072 00:38:46.929 17:21:38 keyring_file -- keyring/file.sh@21 -- # killprocess 2297186 00:38:46.929 17:21:38 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2297186 ']' 00:38:46.929 17:21:38 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2297186 00:38:46.929 17:21:38 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:46.929 17:21:38 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:46.929 17:21:38 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2297186 00:38:46.929 17:21:38 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:46.929 17:21:38 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:46.929 17:21:38 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2297186' 00:38:46.929 killing process with pid 2297186 00:38:46.929 17:21:38 keyring_file -- common/autotest_common.sh@973 -- # kill 2297186 00:38:46.929 17:21:38 keyring_file -- common/autotest_common.sh@978 -- # wait 2297186 00:38:47.192 00:38:47.192 real 0m12.070s 00:38:47.192 user 0m29.209s 00:38:47.192 sys 0m2.657s 00:38:47.192 17:21:39 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:47.192 17:21:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:47.192 ************************************ 00:38:47.192 END TEST keyring_file 00:38:47.192 ************************************ 00:38:47.192 17:21:39 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:38:47.192 17:21:39 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:47.192 17:21:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:47.192 17:21:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 
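[editor's note] That closes keyring_file; the keyring_linux suite starting below is invoked through scripts/keyctl-session-wrapper (see the run_test line above). Judging by the "Joined session keyring: 582489143" banner that follows, the wrapper is presumably a thin shim around keyctl from keyutils, roughly:

# presumed shape of scripts/keyctl-session-wrapper (an assumption
# inferred from the banner below, not a copy of the script): run the
# test inside a fresh anonymous session keyring, so any :spdk-test:*
# keys added with @s are discarded when the session exits
keyctl session - sh test/keyring/linux.sh

keyctl session prints the serial of the new session keyring and execs the given program inside it.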
00:38:47.192 17:21:39 -- common/autotest_common.sh@10 -- # set +x 00:38:47.192 ************************************ 00:38:47.192 START TEST keyring_linux 00:38:47.192 ************************************ 00:38:47.192 17:21:39 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:47.192 Joined session keyring: 582489143 00:38:47.192 * Looking for test storage... 00:38:47.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:47.192 17:21:39 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:47.192 17:21:39 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:38:47.192 17:21:39 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:47.454 17:21:39 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:47.454 17:21:39 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:47.454 17:21:39 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:47.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.454 --rc genhtml_branch_coverage=1 00:38:47.454 --rc genhtml_function_coverage=1 00:38:47.454 --rc genhtml_legend=1 00:38:47.454 --rc geninfo_all_blocks=1 00:38:47.454 --rc geninfo_unexecuted_blocks=1 00:38:47.454 00:38:47.454 ' 00:38:47.454 17:21:39 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:47.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.454 --rc genhtml_branch_coverage=1 00:38:47.454 --rc genhtml_function_coverage=1 00:38:47.454 --rc genhtml_legend=1 00:38:47.454 --rc geninfo_all_blocks=1 00:38:47.454 --rc geninfo_unexecuted_blocks=1 00:38:47.454 00:38:47.454 ' 00:38:47.454 17:21:39 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:47.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.454 --rc genhtml_branch_coverage=1 00:38:47.454 --rc genhtml_function_coverage=1 00:38:47.454 --rc genhtml_legend=1 00:38:47.454 --rc geninfo_all_blocks=1 00:38:47.454 --rc geninfo_unexecuted_blocks=1 00:38:47.454 00:38:47.454 ' 00:38:47.454 17:21:39 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:47.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.454 --rc genhtml_branch_coverage=1 00:38:47.454 --rc genhtml_function_coverage=1 00:38:47.454 --rc genhtml_legend=1 00:38:47.454 --rc geninfo_all_blocks=1 00:38:47.454 --rc geninfo_unexecuted_blocks=1 00:38:47.454 00:38:47.454 ' 00:38:47.454 17:21:39 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:47.454 17:21:39 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:47.454 17:21:39 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:47.454 17:21:39 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:47.454 17:21:39 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:47.454 17:21:39 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:47.454 17:21:39 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:47.454 17:21:39 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:47.454 17:21:39 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:47.454 17:21:39 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:47.454 17:21:39 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:47.454 17:21:39 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:47.454 17:21:39 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:47.454 17:21:39 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:47.454 17:21:39 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:47.454 17:21:39 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:47.454 17:21:39 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:47.454 17:21:39 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:47.454 17:21:39 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:47.454 17:21:39 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:47.454 17:21:39 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:47.454 17:21:39 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.454 17:21:39 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.455 17:21:39 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.455 17:21:39 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:47.455 17:21:39 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
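[editor's note] Before any keys are registered, it is worth pinning down what format_interchange_psk emits; the same `python -` one-liner was traced in the keyring_file run above and reappears in prep_key just below. A sketch of the presumed derivation, reconstructed from the traces rather than copied from nvmf/common.sh: the key argument is taken as raw bytes, a little-endian CRC-32 is appended, and the result is base64-encoded inside the NVMe TLS PSK interchange framing:

# assumption: this mirrors format_key/format_interchange_psk as traced
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()  # the 32 hex characters are used as raw bytes
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PYEOF
}
format_interchange_psk 00112233445566778899aabbccddeeff 0
# -> NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The expected output shown is the exact key0 string that the keyctl traces later in this log register and print back.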
00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:47.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:47.455 17:21:39 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:47.455 17:21:39 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:47.455 17:21:39 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:47.455 17:21:39 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:47.455 17:21:39 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:47.455 17:21:39 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:47.455 17:21:39 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:47.455 17:21:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:47.455 17:21:39 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:47.455 17:21:39 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:47.455 17:21:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:47.455 17:21:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:47.455 17:21:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:47.455 17:21:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:47.455 17:21:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:47.455 /tmp/:spdk-test:key0 00:38:47.455 17:21:39 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:47.455 17:21:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:47.455 17:21:39 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:47.455 17:21:39 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:47.455 17:21:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:47.455 17:21:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:47.455 
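[editor's note] Both key files get chmod 0600 before registration, and that mode is load-bearing: earlier in this run, keyring_file_add_key rejected a 0660 file ("Invalid permissions for key file ... 0100660") with Operation not permitted, so the file backend evidently refuses any key file readable by group or other:

chmod 0660 /tmp/:spdk-test:key0   # add_key would fail: Operation not permitted
chmod 0600 /tmp/:spdk-test:key0   # owner-only, as required; add_key succeeds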
17:21:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:47.455 17:21:39 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:47.455 17:21:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:47.455 17:21:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:47.455 /tmp/:spdk-test:key1 00:38:47.455 17:21:39 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2299508 00:38:47.455 17:21:39 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2299508 00:38:47.455 17:21:39 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:47.455 17:21:39 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2299508 ']' 00:38:47.455 17:21:39 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:47.455 17:21:39 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:47.455 17:21:39 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:47.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:47.455 17:21:39 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:47.455 17:21:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:47.717 [2024-11-20 17:21:39.651125] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
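[editor's note] Once spdk_tgt is up and listening on 127.0.0.1 port 4420 (startup traces below), linux.sh registers both interchange PSKs as "user"-type keys in the session keyring; keyctl add prints each new key's serial, 981946698 and 468472997 in this run:

keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
keyctl add user :spdk-test:key1 "NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:" @s

The later check_keys pass resolves the serial back with keyctl search @s user :spdk-test:key0 and dumps the payload with keyctl print to confirm it matches the NVMeTLSkey-1 string.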
00:38:47.717 [2024-11-20 17:21:39.651213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2299508 ] 00:38:47.717 [2024-11-20 17:21:39.740953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.717 [2024-11-20 17:21:39.775799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:48.289 17:21:40 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:48.289 17:21:40 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:48.289 17:21:40 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:48.289 17:21:40 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.289 17:21:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:48.289 [2024-11-20 17:21:40.434456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:48.289 null0 00:38:48.551 [2024-11-20 17:21:40.466507] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:48.551 [2024-11-20 17:21:40.466868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:48.551 17:21:40 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.551 17:21:40 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:48.551 981946698 00:38:48.551 17:21:40 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:48.551 468472997 00:38:48.551 17:21:40 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2299843 00:38:48.551 17:21:40 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2299843 /var/tmp/bperf.sock 00:38:48.551 17:21:40 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:48.551 17:21:40 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2299843 ']' 00:38:48.551 17:21:40 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:48.551 17:21:40 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:48.551 17:21:40 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:48.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:48.551 17:21:40 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:48.551 17:21:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:48.551 [2024-11-20 17:21:40.544164] Starting SPDK v25.01-pre git sha1 325a79ea3 / DPDK 24.03.0 initialization... 
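[editor's note] Note the -z --wait-for-rpc on the bdevperf command line above: the application halts before subsystem initialization so the kernel-keyring backend can be switched on first. That makes the RPC ordering traced below significant; paraphrasing the bperf_cmd wrapper (which, per keyring/common.sh@8 in these traces, just drives rpc.py at /var/tmp/bperf.sock):

# must run while the app is still paused by --wait-for-rpc
scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
# only now can --psk :spdk-test:key0 be resolved from the session keyring
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key0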
00:38:48.551 [2024-11-20 17:21:40.544213] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2299843 ]
00:38:48.551 [2024-11-20 17:21:40.625547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:48.551 [2024-11-20 17:21:40.655340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:38:49.493 17:21:41 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:38:49.493 17:21:41 keyring_linux -- common/autotest_common.sh@868 -- # return 0
00:38:49.493 17:21:41 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable
00:38:49.493 17:21:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
00:38:49.493 17:21:41 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init
00:38:49.493 17:21:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:38:49.754 17:21:41 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:38:49.754 17:21:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:38:49.754 [2024-11-20 17:21:41.873165] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:38:50.016 nvme0n1
00:38:50.016 17:21:41 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0
00:38:50.016 17:21:41 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0
00:38:50.016 17:21:41 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:38:50.016 17:21:41 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:38:50.016 17:21:41 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:38:50.016 17:21:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:50.016 17:21:42 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count ))
00:38:50.016 17:21:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:38:50.016 17:21:42 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0
00:38:50.016 17:21:42 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn
00:38:50.016 17:21:42 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:38:50.016 17:21:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:50.016 17:21:42 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")'
00:38:50.276 17:21:42 keyring_linux -- keyring/linux.sh@25 -- # sn=981946698
00:38:50.276 17:21:42 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0
00:38:50.276 17:21:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:38:50.276 17:21:42 keyring_linux -- keyring/linux.sh@26 -- # [[ 981946698 == \9\8\1\9\4\6\6\9\8 ]]
00:38:50.276 17:21:42 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 981946698
00:38:50.276 17:21:42 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:38:50.276 17:21:42 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:38:50.276 Running I/O for 1 seconds...
00:38:51.661 24535.00 IOPS, 95.84 MiB/s
00:38:51.661 Latency(us)
00:38:51.661 [2024-11-20T16:21:43.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:51.661 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:38:51.661 nvme0n1 : 1.01 24534.02 95.84 0.00 0.00 5201.72 2170.88 6690.13
00:38:51.661 [2024-11-20T16:21:43.837Z] ===================================================================================================================
00:38:51.661 [2024-11-20T16:21:43.837Z] Total : 24534.02 95.84 0.00 0.00 5201.72 2170.88 6690.13
00:38:51.661 {
00:38:51.661 "results": [
00:38:51.661 {
00:38:51.661 "job": "nvme0n1",
00:38:51.661 "core_mask": "0x2",
00:38:51.661 "workload": "randread",
00:38:51.661 "status": "finished",
00:38:51.661 "queue_depth": 128,
00:38:51.661 "io_size": 4096,
00:38:51.661 "runtime": 1.005257,
00:38:51.661 "iops": 24534.024632506913,
00:38:51.661 "mibps": 95.83603372073013,
00:38:51.661 "io_failed": 0,
00:38:51.661 "io_timeout": 0,
00:38:51.661 "avg_latency_us": 5201.718214329157,
00:38:51.661 "min_latency_us": 2170.88,
00:38:51.661 "max_latency_us": 6690.133333333333
00:38:51.661 }
00:38:51.661 ],
00:38:51.661 "core_count": 1
00:38:51.661 }
00:38:51.661 17:21:43 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:38:51.661 17:21:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:38:51.661 17:21:43 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:38:51.661 17:21:43 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:38:51.661 17:21:43 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:38:51.661 17:21:43 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:38:51.661 17:21:43 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:38:51.661 17:21:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:51.661 17:21:43 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:38:51.661 17:21:43 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:38:51.661 17:21:43 keyring_linux -- keyring/linux.sh@23 -- # return
00:38:51.661 17:21:43 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:38:51.661 17:21:43 keyring_linux -- common/autotest_common.sh@652 -- # local es=0
00:38:51.661 17:21:43 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
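
[Editor's note] A quick sanity check on the results above: the MiB/s column is simply IOPS times the 4096-byte I/O size, so the reported figures are internally consistent:

  # 24534.02 IOPS * 4096 B per I/O, converted to MiB/s
  python3 -c 'print(24534.02 * 4096 / 2**20)'   # ~95.84, matching both the table and the "mibps" field in the JSON dump
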
00:38:51.661 17:21:43 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:38:51.661 17:21:43 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:38:51.661 17:21:43 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:38:51.661 17:21:43 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:38:51.661 17:21:43 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:38:51.661 17:21:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:38:51.943 [2024-11-20 17:21:43.985873] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:38:51.943 [2024-11-20 17:21:43.986653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2283ba0 (107): Transport endpoint is not connected
00:38:51.943 [2024-11-20 17:21:43.987648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2283ba0 (9): Bad file descriptor
00:38:51.943 [2024-11-20 17:21:43.988651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:38:51.943 [2024-11-20 17:21:43.988659] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:38:51.943 [2024-11-20 17:21:43.988665] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:38:51.943 [2024-11-20 17:21:43.988671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
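
[Editor's note] This is the negative half of the test: the NOT wrapper expects the attach to fail, presumably because :spdk-test:key1 holds a different PSK than the one the target was set up with, so the TLS connection is torn down and the controller never initializes (the errors above; the raw JSON-RPC request and error response follow below). A hedged sketch of the same failing call issued by hand, with the path shortened to the repo-relative script:

  # expected to fail: key1's PSK does not match the target's configured key
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
      --psk :spdk-test:key1
  # expected result: JSON-RPC error -5 (Input/output error)
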
00:38:51.943 request:
00:38:51.943 {
00:38:51.943 "name": "nvme0",
00:38:51.943 "trtype": "tcp",
00:38:51.943 "traddr": "127.0.0.1",
00:38:51.943 "adrfam": "ipv4",
00:38:51.943 "trsvcid": "4420",
00:38:51.943 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:51.943 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:51.943 "prchk_reftag": false,
00:38:51.943 "prchk_guard": false,
00:38:51.943 "hdgst": false,
00:38:51.943 "ddgst": false,
00:38:51.943 "psk": ":spdk-test:key1",
00:38:51.943 "allow_unrecognized_csi": false,
00:38:51.943 "method": "bdev_nvme_attach_controller",
00:38:51.943 "req_id": 1
00:38:51.943 }
00:38:51.943 Got JSON-RPC error response
00:38:51.943 response:
00:38:51.943 {
00:38:51.943 "code": -5,
00:38:51.943 "message": "Input/output error"
00:38:51.943 }
00:38:51.943 17:21:44 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:38:51.943 17:21:44 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:38:51.943 17:21:44 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:38:51.943 17:21:44 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:38:51.943 17:21:44 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:38:51.943 17:21:44 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:38:51.943 17:21:44 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:38:51.943 17:21:44 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:38:51.943 17:21:44 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:38:51.943 17:21:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:38:51.943 17:21:44 keyring_linux -- keyring/linux.sh@33 -- # sn=981946698
00:38:51.943 17:21:44 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 981946698
00:38:51.944 1 links removed
00:38:51.944 17:21:44 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:38:51.944 17:21:44 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:38:51.944 17:21:44 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:38:51.944 17:21:44 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:38:51.944 17:21:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:38:51.944 17:21:44 keyring_linux -- keyring/linux.sh@33 -- # sn=468472997
00:38:51.944 17:21:44 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 468472997
00:38:51.944 1 links removed
00:38:51.944 17:21:44 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2299843
00:38:51.944 17:21:44 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2299843 ']'
00:38:51.944 17:21:44 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2299843
00:38:51.944 17:21:44 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:38:51.944 17:21:44 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:51.944 17:21:44 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2299843
00:38:51.944 17:21:44 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:38:51.944 17:21:44 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:38:51.944 17:21:44 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2299843'
00:38:51.944 killing process with pid 2299843
00:38:51.944 17:21:44 keyring_linux -- common/autotest_common.sh@973 -- # kill 2299843
00:38:51.944 Received shutdown signal, test time was about 1.000000 seconds
00:38:51.944
00:38:51.944 Latency(us)
00:38:51.944 [2024-11-20T16:21:44.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:51.944 [2024-11-20T16:21:44.120Z] ===================================================================================================================
00:38:51.944 [2024-11-20T16:21:44.120Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:51.944 17:21:44 keyring_linux -- common/autotest_common.sh@978 -- # wait 2299843
00:38:52.204 17:21:44 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2299508
00:38:52.204 17:21:44 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2299508 ']'
00:38:52.204 17:21:44 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2299508
00:38:52.204 17:21:44 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:38:52.204 17:21:44 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:52.204 17:21:44 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2299508
00:38:52.204 17:21:44 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:38:52.204 17:21:44 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:38:52.204 17:21:44 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2299508'
00:38:52.204 killing process with pid 2299508
00:38:52.205 17:21:44 keyring_linux -- common/autotest_common.sh@973 -- # kill 2299508
00:38:52.466 17:21:44 keyring_linux -- common/autotest_common.sh@978 -- # wait 2299508
00:38:52.466
00:38:52.466 real 0m5.196s
00:38:52.466 user 0m9.650s
00:38:52.466 sys 0m1.450s
00:38:52.466 17:21:44 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:52.466 17:21:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:38:52.466 ************************************
00:38:52.466 END TEST keyring_linux
00:38:52.466 ************************************
00:38:52.466 17:21:44 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:38:52.466 17:21:44 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:38:52.466 17:21:44 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:38:52.466 17:21:44 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:38:52.466 17:21:44 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:38:52.466 17:21:44 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:38:52.466 17:21:44 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:38:52.466 17:21:44 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:38:52.466 17:21:44 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:38:52.466 17:21:44 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:38:52.466 17:21:44 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:38:52.466 17:21:44 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:38:52.466 17:21:44 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:38:52.466 17:21:44 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:38:52.466 17:21:44 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:38:52.466 17:21:44 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:38:52.466 17:21:44 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:38:52.466 17:21:44 -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:52.466 17:21:44 -- common/autotest_common.sh@10 -- # set +x
00:38:52.466 17:21:44 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:38:52.466 17:21:44 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:38:52.466 17:21:44 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:38:52.466 17:21:44 -- common/autotest_common.sh@10 -- # set +x
00:39:00.610 INFO: APP EXITING
00:39:00.610 INFO: killing all VMs
00:39:00.610 INFO: killing vhost app
00:39:00.610 WARN: no vhost pid file found
00:39:00.610 INFO: EXIT DONE
00:39:03.911 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:39:03.911 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:39:03.911 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:39:03.911 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:39:03.911 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:39:03.911 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:39:03.911 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:39:03.911 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:39:03.911 0000:65:00.0 (144d a80a): Already using the nvme driver
00:39:03.911 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:39:03.911 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:39:03.911 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:39:03.911 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:39:03.911 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:39:03.911 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:39:03.911 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:39:03.911 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:39:08.119 Cleaning
00:39:08.119 Removing: /var/run/dpdk/spdk0/config
00:39:08.119 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:39:08.119 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:39:08.119 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:39:08.119 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:39:08.119 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:39:08.119 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:39:08.119 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:39:08.119 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:39:08.119 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:39:08.119 Removing: /var/run/dpdk/spdk0/hugepage_info
00:39:08.119 Removing: /var/run/dpdk/spdk1/config
00:39:08.119 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:39:08.119 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:39:08.119 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:39:08.119 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:39:08.119 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:39:08.119 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:39:08.119 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:39:08.119 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:39:08.119 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:39:08.119 Removing: /var/run/dpdk/spdk1/hugepage_info
00:39:08.119 Removing: /var/run/dpdk/spdk2/config
00:39:08.119 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:39:08.119 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:39:08.119 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:39:08.119 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:39:08.119 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:39:08.119 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:39:08.120 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:39:08.120 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:39:08.120 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:39:08.120 Removing: /var/run/dpdk/spdk2/hugepage_info
00:39:08.120 Removing: /var/run/dpdk/spdk3/config
00:39:08.120 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:39:08.120 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:39:08.120 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:39:08.120 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:39:08.120 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:39:08.120 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:39:08.120 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:39:08.120 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:39:08.120 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:39:08.120 Removing: /var/run/dpdk/spdk3/hugepage_info
00:39:08.120 Removing: /var/run/dpdk/spdk4/config
00:39:08.120 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:39:08.120 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:39:08.120 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:39:08.120 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:39:08.120 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:39:08.120 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:39:08.120 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:39:08.120 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:39:08.120 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:39:08.120 Removing: /var/run/dpdk/spdk4/hugepage_info
00:39:08.120 Removing: /dev/shm/bdev_svc_trace.1
00:39:08.120 Removing: /dev/shm/nvmf_trace.0
00:39:08.120 Removing: /dev/shm/spdk_tgt_trace.pid1721183
00:39:08.120 Removing: /var/run/dpdk/spdk0
00:39:08.120 Removing: /var/run/dpdk/spdk1
00:39:08.120 Removing: /var/run/dpdk/spdk2
00:39:08.120 Removing: /var/run/dpdk/spdk3
00:39:08.120 Removing: /var/run/dpdk/spdk4
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1719692
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1721183
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1722030
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1723069
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1723409
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1724480
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1724659
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1724946
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1726086
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1726860
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1727231
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1727569
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1727937
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1728257
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1728524
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1728874
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1729265
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1730332
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1734038
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1734761
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1735178
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1735233
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1735706
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1735945
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1736321
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1736456
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1736702
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1736855
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1737061
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1737309
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1737844
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1738161
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1738474
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1743135
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1748489
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1760503
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1761294
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1766367
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1766820
00:39:08.120 Removing: /var/run/dpdk/spdk_pid1772117
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1779203
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1782306
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1795573
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1806449
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1808607
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1809805
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1830713
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1835575
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1892068
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1899043
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1906116
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1914125
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1914133
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1915139
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1916139
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1917153
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1917822
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1917824
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1918162
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1918172
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1918174
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1919195
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1920206
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1921278
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1922196
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1922209
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1922535
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1923878
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1925063
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1935041
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1969500
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1974913
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1976906
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1979047
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1979341
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1979725
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1980068
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1980784
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1983538
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1984681
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1985383
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1987889
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1988493
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1989365
00:39:08.381 Removing: /var/run/dpdk/spdk_pid1994276
00:39:08.381 Removing: /var/run/dpdk/spdk_pid2000978
00:39:08.381 Removing: /var/run/dpdk/spdk_pid2000979
00:39:08.381 Removing: /var/run/dpdk/spdk_pid2000980
00:39:08.381 Removing: /var/run/dpdk/spdk_pid2005668
00:39:08.381 Removing: /var/run/dpdk/spdk_pid2015908
00:39:08.381 Removing: /var/run/dpdk/spdk_pid2020725
00:39:08.381 Removing: /var/run/dpdk/spdk_pid2028003
00:39:08.381 Removing: /var/run/dpdk/spdk_pid2029646
00:39:08.381 Removing: /var/run/dpdk/spdk_pid2031770
00:39:08.381 Removing: /var/run/dpdk/spdk_pid2033545
00:39:08.381 Removing: /var/run/dpdk/spdk_pid2039085
00:39:08.381 Removing: /var/run/dpdk/spdk_pid2044534
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2049423
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2058671
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2058680
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2063740
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2064069
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2064397
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2064749
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2064754
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2070383
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2070958
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2076459
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2079602
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2086319
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2093569
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2103622
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2112412
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2112458
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2135411
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2136096
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2136804
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2137705
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2138958
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2140007
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2140782
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2141467
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2146560
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2146875
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2154135
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2154297
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2160837
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2166108
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2177479
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2178146
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2183202
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2183591
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2188714
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2196060
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2198983
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2211194
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2221836
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2223830
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2224886
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2245128
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2249833
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2253209
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2260732
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2260808
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2266910
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2269114
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2271332
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2272813
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2275022
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2276537
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2286496
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2287159
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2287760
00:39:08.643 Removing: /var/run/dpdk/spdk_pid2290664
00:39:08.918 Removing: /var/run/dpdk/spdk_pid2291230
00:39:08.918 Removing: /var/run/dpdk/spdk_pid2292313
00:39:08.918 Removing: /var/run/dpdk/spdk_pid2297186
00:39:08.918 Removing: /var/run/dpdk/spdk_pid2297262
00:39:08.918 Removing: /var/run/dpdk/spdk_pid2299072
00:39:08.918 Removing: /var/run/dpdk/spdk_pid2299508
00:39:08.918 Removing: /var/run/dpdk/spdk_pid2299843
00:39:08.918 Clean
00:39:08.918 17:22:00 -- common/autotest_common.sh@1453 -- # return 0
00:39:08.918 17:22:00 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:39:08.918 17:22:00 -- common/autotest_common.sh@732 -- # xtrace_disable
00:39:08.918 17:22:00 -- common/autotest_common.sh@10 -- # set +x
00:39:08.918 17:22:00 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:39:08.918 17:22:00 -- common/autotest_common.sh@732 -- # xtrace_disable
00:39:08.918 17:22:00 -- common/autotest_common.sh@10 -- # set +x
00:39:08.918 17:22:01 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:08.918 17:22:01 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:39:08.918 17:22:01 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:39:08.918 17:22:01 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:39:08.918 17:22:01 -- spdk/autotest.sh@398 -- # hostname
00:39:08.918 17:22:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:39:09.190 geninfo: WARNING: invalid characters removed from testname!
00:39:35.766 17:22:26 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:37.679 17:22:29 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:39.584 17:22:31 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:40.964 17:22:32 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:42.873 17:22:34 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:44.323 17:22:36 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:46.235 17:22:37 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:39:46.235 17:22:37 -- spdk/autorun.sh@1 -- $ timing_finish
00:39:46.235 17:22:37 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:39:46.235 17:22:37 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:46.235 17:22:37 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:39:46.235 17:22:37 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:46.235 + [[ -n 1634292 ]]
00:39:46.235 + sudo kill 1634292
00:39:46.246 [Pipeline] }
00:39:46.259 [Pipeline] // stage
00:39:46.263 [Pipeline] }
00:39:46.283 [Pipeline] // timeout
00:39:46.286 [Pipeline] }
00:39:46.298 [Pipeline] // catchError
00:39:46.302 [Pipeline] }
00:39:46.312 [Pipeline] // wrap
00:39:46.317 [Pipeline] }
00:39:46.326 [Pipeline] // catchError
00:39:46.333 [Pipeline] stage
00:39:46.335 [Pipeline] { (Epilogue)
00:39:46.345 [Pipeline] catchError
00:39:46.347 [Pipeline] {
00:39:46.360 [Pipeline] echo
00:39:46.361 Cleanup processes
00:39:46.367 [Pipeline] sh
00:39:46.658 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:46.659 2312887 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:46.672 [Pipeline] sh
00:39:46.961 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:46.961 ++ grep -v 'sudo pgrep'
00:39:46.961 ++ awk '{print $1}'
00:39:46.961 + sudo kill -9
00:39:46.961 + true
00:39:46.975 [Pipeline] sh
00:39:47.269 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:59.558 [Pipeline] sh
00:39:59.848 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:59.848 Artifacts sizes are good
00:39:59.863 [Pipeline] archiveArtifacts
00:39:59.871 Archiving artifacts
00:40:00.008 [Pipeline] sh
00:40:00.294 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:40:00.319 [Pipeline] cleanWs
00:40:00.330 [WS-CLEANUP] Deleting project workspace...
00:40:00.330 [WS-CLEANUP] Deferred wipeout is used...
00:40:00.337 [WS-CLEANUP] done
00:40:00.339 [Pipeline] }
00:40:00.357 [Pipeline] // catchError
00:40:00.371 [Pipeline] sh
00:40:00.659 + logger -p user.info -t JENKINS-CI
00:40:00.679 [Pipeline] }
00:40:00.692 [Pipeline] // stage
00:40:00.697 [Pipeline] }
00:40:00.711 [Pipeline] // node
00:40:00.716 [Pipeline] End of Pipeline
00:40:00.749 Finished: SUCCESS